Tuesday, March 18, 2014

26 Billion IoT Devices



The Impending IT Headache of the 26 Billion-Thing Internet of Things

The rapid growth of interconnected devices making up the Internet of Things will wreak havoc on data security, storage, servers, networks and end user privacy, according to a new report.
There will be 26 billion "things" making up the Internet of Things within six years, according to a report released by Gartner. The implications for IT are profound — in particular for data center operations.
"IoT threatens to generate massive amounts of input data from sources that are globally distributed," said Joe Skorupa, vice president and distinguished analyst at Gartner, in a statement released to coincide with the report. "Transferring the entirety of that data to a single location for processing will not be technically and economically viable. The recent trend to centralize applications to reduce costs and increase security is incompatible with the IoT. Organizations will be forced to aggregate data in multiple distributed mini data centers where initial processing can occur. Relevant data will then be forwarded to a central site for additional processing."
He added that the effects will impact more than just centralized applications. "The enormous number of devices, coupled with the sheer volume, velocity and structure of IoT data, creates challenges, particularly in the areas of security, data, storage management, servers and the data center network, as real-time business processes are at stake," he said. "Data center managers will need to deploy more forward-looking capacity management in these areas to be able to proactively meet the business priorities associated with IoT."
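To make Skorupa's distributed-processing point concrete, here is a minimal Python sketch of the pattern he describes: an edge "mini data center" reduces raw readings to a compact summary and forwards only the relevant result to the central site. The Reading type, the alert threshold and the forward_to_central() stub are assumptions made for this example, not anything specified in the Gartner report.

```python
# A minimal sketch (not from the Gartner report) of the edge-aggregation pattern
# Skorupa describes: a distributed "mini data center" reduces raw readings to a
# compact summary and forwards only the relevant result to the central site.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:            # hypothetical raw IoT reading
    sensor_id: str
    value: float

ALERT_THRESHOLD = 80.0    # assumed cut-off for "relevant" readings

def aggregate_at_edge(readings):
    """Reduce a batch of raw readings to a compact summary before forwarding."""
    values = [r.value for r in readings]
    return {
        "count": len(values),
        "mean": mean(values),
        "max": max(values),
        "alerts": [r.sensor_id for r in readings if r.value > ALERT_THRESHOLD],
    }

def forward_to_central(summary):
    # Placeholder for an HTTPS or message-queue call to the central site.
    print("forwarding summary:", summary)

if __name__ == "__main__":
    batch = [Reading("s1", 42.0), Reading("s2", 91.5), Reading("s3", 60.2)]
    forward_to_central(aggregate_at_edge(batch))
```

Only the summary and the alert list travel to the central site; the bulk of the raw data stays, and is processed, at the edge.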
Significant implications noted in the report included:
  • Given the volume of data, comprehensive backups "will present potentially insoluble governance issues, such as network bandwidth and remote storage bandwidth, and capacity to back up all raw data is likely to be unaffordable";
  • This, in turn, will lead to the need for automated selective backups;
  • Availability requirements will continue to grow even as the IoT builds, "putting real-time business processes and, potentially, personal safety at risk";
  • The potential for breaches of individual privacy will increase.
Fabrizio Biscotti, research director at Gartner, said the advent of the Internet of Things will push IT further into virtualization and the cloud.

CI Manager Plus for UCS Monitoring

ManageEngine Launches CI Manager Plus to Simplify UCS Monitoring for Large Enterprises

Easy-to-Use Converged Infrastructure Software Monitors Cisco Unified Computing System, Raises Actionable Alarms, Instantly Sends Alarm Notifications via Email
  • Monitor Cisco UCS for fault and performance
  • View the hierarchy of UCS via intuitive 2D maps
  • View UCS infrastructure in 3D
  • View live demo at http://demo.cimanagerplus.com/
  • Visit ManageEngine at Cisco Live in booth 27
MELBOURNE, Australia and PLEASANTON, Calif. - March 18, 2014 - ManageEngine, the real-time IT management company, today announced the launch of its new converged infrastructure (CI) management software, CI Manager Plus. Available immediately, CI Manager Plus simplifies the Cisco Unified Computing System (UCS) monitoring tasks of data center administrators at large enterprises.
ManageEngine will be demonstrating the new application's features at Cisco Live, March 18-21, 2014, in Melbourne, Australia. At the show, ManageEngine will be in booth 27.
In a Cisco UCS environment, the real challenge is management. UCS Manager offers comprehensive UCS management, but it is not especially user-friendly, has a steep learning curve and floods admins with large numbers of uncorrelated events. CI Manager Plus makes UCS management simple and eliminates the complexities involved in working with UCS Manager alone.
CI Manager Plus monitors Cisco UCS via UCS Manager XML APIs. It discovers the UCS and monitors all the devices in the system periodically. CI Manager Plus provides a 2D map of the UCS architecture to help visualize the parent-child relationship of all the devices in the system. This allows data center admins to drill down and identify the exact device that is causing the problem. CI Manager Plus also provides a 3D UCS builder that helps admins create exact replicas of their UCS infrastructures in 3D and embed them in the CI Manager Plus dashboard.
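CI Manager Plus's internals are not public, but the UCS Manager XML API it polls is documented by Cisco. The sketch below shows, in Python, the general shape of such a polling session: log in for a session cookie, resolve a class of managed objects, read their state, and log out. The host, credentials, and the computeBlade/operState choices are placeholder assumptions for illustration, not CI Manager Plus code.

```python
# Illustrative only: the general shape of a polling session against the Cisco
# UCS Manager XML API (the interface CI Manager Plus builds on). The host,
# credentials and the computeBlade/operState choices are placeholder assumptions.
import requests
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"   # hypothetical UCS Manager endpoint

def post_xml(body):
    # verify=False only because this is a lab-style sketch; use proper certs in production.
    resp = requests.post(UCSM_URL, data=body, verify=False, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# 1. Log in and capture the session cookie.
login = post_xml('<aaaLogin inName="admin" inPassword="secret" />')
cookie = login.get("outCookie")

# 2. Resolve all blade objects and read their operational state.
blades = post_xml(
    f'<configResolveClass cookie="{cookie}" inHierarchical="false" classId="computeBlade" />'
)
for blade in blades.iter("computeBlade"):
    print(blade.get("dn"), blade.get("operState"))

# 3. Always release the session.
post_xml(f'<aaaLogout inCookie="{cookie}" />')
```

A monitoring product repeats this kind of cycle periodically and turns the returned objects into maps and alarms.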
CI Manager Plus provides only the essential performance and fault data, thereby helping admins reduce the effort required to sift through the mountain of data generated by UCS Manager. CI Manager Plus also includes a best-in-class fault management module, which correlates all the related events raised by UCS Manager into meaningful alarms and uses color codes to differentiate alarm severity. It also includes an email notification option to alert admins immediately.
"Most large enterprises today want their data centers to be agile and energy efficient," said Bharani Kumar, marketing manager for CI Manager Plus at ManageEngine. "Converged infrastructure devices such as UCS let them quickly expand their data centers and, at the same time, go green. This trend will continue to grow, and a lot of large enterprises will adopt such converged infrastructure devices."
CI Manager Plus is built on OpManager, ManageEngine’s highly scalable data center infrastructure management software that supports monitoring of 50,000 devices or 1 million interfaces from a single server. Data center admins seeking more visibility into their data centers can convert CI Manager Plus into OpManager for network management, physical and virtual server monitoring, 3D data center visualization, workflow automation and more.

Pricing and Availability

CI Manager Plus is available for immediate download at http://www.manageengine.com/converged-infrastructure/download.html. Pricing starts at $995 USD for monitoring a single UCS. B-Series, C-Series and virtual server monitoring is priced at $1,995 USD for 50 devices.
For more information on CI Manager Plus, please visit http://www.manageengine.com/converged-infrastructure/index.html. For more information on ManageEngine, please visit http://buzz.manageengine.com/; follow the company blog at http://blogs.manageengine.com/, on Facebook at http://www.facebook.com/ManageEngine and on Twitter at @ManageEngine.

About CI Manager Plus

CI Manager Plus is a Cisco UCS monitoring solution with 2D maps and advanced fault management modules. The 2D map of CI Manager Plus provides the complete hierarchy of a UCS with all the parent-child relationships mapped among its devices. Its advanced fault management module provides color-coded alarms for faults raised and instantly sends an alarm notification via email. For more information on CI Manager Plus, please visit http://www.manageengine.com/converged-infrastructure/index.html.

About OpManager

ManageEngine OpManager is a network management platform that helps large enterprises, service providers and SMEs manage their data centers and IT infrastructure efficiently and cost effectively. Automated workflows, intelligent alerting engines, configurable discovery rules, and extendable templates enable IT teams to set up a 24x7 monitoring system within hours of installation. Do-it-yourself plug-ins extend the scope of management to include network change and configuration management and IP address management as well as monitoring of networks, applications, databases, virtualization and NetFlow-based bandwidth. For more information on ManageEngine OpManager, please visit http://www.manageengine.com/opmanager.

About ManageEngine

ManageEngine delivers the real-time IT management tools that empower an IT team to meet an organization's need for real-time services and support. Worldwide, more than 90,000 established and emerging enterprise customers - including more than 60 percent of the Fortune 500 - rely on ManageEngine products to ensure the optimal performance of their critical IT infrastructure, including networks, servers, applications, desktops and more. Another 300,000-plus admins optimize their IT using the free editions of ManageEngine products. ManageEngine is a division of Zoho Corp. with offices worldwide, including the United States, India, Japan and China. For more information, please visit http://buzz.manageengine.com/; follow the company blog at http://blogs.manageengine.com/, on Facebook at http://www.facebook.com/ManageEngine and on Twitter at @ManageEngine.

Monday, March 17, 2014

Managing Your Wi-Fi and Mobility



Managing Wi-Fi and mobility

We’re living in a mobile world. Smartphones and tablets are increasingly the predominant devices on our networks, moving traffic away from wired to wireless and changing the way we need to design and manage our networks. Things get even more complex when we have to factor in users’ desires to bring their own devices to work. How can we find a balance, and how can we simplify the increasingly complex management task?
Currently mobile devices and their networks are managed by a mix of different tools, all with their own user interfaces and idiosyncrasies. You are likely using a tool like Microsoft’s System Center Configuration Manager or its cloud service Intune to manage devices, with Windows Network Access Protection controlling access to network resources – while using proprietary tools to push configurations to network equipment. That is a complex mix of tools and technologies, and one that requires several different skill sets.
Modern wireless access points are powerful devices, capable of supporting large numbers of simultaneous high-speed connections to smartphones, tablets and laptops. That also means supporting a wide selection of different applications, with as wide a range of bandwidth requirements – from low bandwidth document access to delivering HD video streams to devices with 4K screens. That makes the wireless environment increasingly complex – and that is before we introduce users bringing in their own devices (if you’re supporting BYOD) or visitors expecting guest access.
Aerohive’s cloud-hosted Mobility Suite is a response to this growing complexity, bringing that mix of tools into one application. It starts with a client-management tool, which lets you distinguish between devices that are part of your corporate fleet, BYOD devices, and untrusted guest devices. Administrators get a one-stop shop for configuring policies and monitoring network usage, while employees get a self-service portal where they can register devices and manage their wireless access.
Guest devices are controlled via an ID manager tool that handles user authentication for different types of guest user and delivers log-in credentials via SMS. There is also the option to use kiosks and web portals to register guest devices. If you’re using an existing MDM, it can work alongside Mobility Suite, pushing device agents and software.
There’s one issue with tools like this: they require standardising on a single supplier’s Wi-Fi access points. Aerohive’s solution depends on its HiveOS APs to manage access and devices from the cloud.
While that is not likely to be an issue for larger enterprises that standardise on suppliers quickly, it can be an issue for smaller businesses that may have a mix of Wi-Fi hardware. Getting the right AP for your business is an important part of the purchasing process, as you’ll need to ensure you have the right technology for your needs.
Matthew Gast from Aerohive talked us through some of the features of a modern AP, as the radio front end and antenna design are as important to delivering a successful network as the management tools.
“It’s all up to the infrastructure supplier design,” he said. “Some of it is the antenna, but a lot of it is the amplifier so it can feed a clean signal.”
While Aerohive’s cloud-management tools mix device and network management, Xirrus’s Mobilize is more of a network-management tool for its devices, delivering profiles to APs and helping design networks.
Getting wireless network design right is important in the transition from 802.11a/b/g/n to 802.11ac. Improved beamforming means newer wireless technologies can deliver the same coverage from fewer APs. However, that doesn’t mean networks are immune to capacity problems – something Xirrus’ Application Control tools are designed to help manage.
Like Aerohive, Xirrus is best known for its enterprise Wi-Fi solutions. First and foremost a hardware supplier, it also offers a range of network management tools and services that take advantage of its hardware capabilities.
A key component of its management tooling, Application Control, is intended to reduce the load that BYOD and personal devices place on networks. Users have come to expect wireless networks to perform as well as wired ones. Unfortunately, even with fast 802.11ac networks that expectation remains hard to meet – especially with high-bandwidth HD video.
By using access points to inspect the packets they’re transmitting, it’s possible for Application Control to block or apply quality of service restrictions to unwanted apps.
Application Control takes a profile-based approach to policy, with profiles for more than 1,200 applications. You can use those to build appropriate policies, or simply to track which applications are in use and how much bandwidth they consume in a central dashboard. If one specific app starts causing problems, or does not meet the requirements of an acceptable use policy, it can be throttled or blocked.
Putting deep-packet inspection on the edge of a network in an AP makes a lot of sense. It’s a low-impact way of distributing network management, reducing bottlenecks and putting wireless management where it belongs: with the wireless devices. 
Bandwidth-hungry apps can be blocked quickly, so your users updating their iPhones won’t stop your CRM system from giving the sales team customer contacts, or your ERP system from sending orders to suppliers. You can even route individual applications to specific VLANs, keeping user traffic and unapproved applications separate from your central business systems and services.
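As an illustration of that profile-based model (and not actual Xirrus code), here is a toy Python policy table. The AP's DPI engine is assumed to have already classified a flow into an application profile name; the table then decides whether to allow, throttle or block it, and which VLAN to place it on. All names and numbers are hypothetical.

```python
# Not Xirrus code: a toy illustration of profile-based application policy. The
# AP's DPI engine is assumed to have already classified a flow into a profile
# name; the table then decides allow/throttle/block and the VLAN to use.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AppPolicy:
    action: str                      # "allow", "throttle" or "block"
    rate_limit_kbps: Optional[int] = None
    vlan: Optional[int] = None

# Hypothetical policy table keyed by detected application profile.
POLICIES = {
    "ios-software-update": AppPolicy("block"),
    "youtube":             AppPolicy("throttle", rate_limit_kbps=2000),
    "crm":                 AppPolicy("allow", vlan=10),   # business traffic on its own VLAN
    "guest-web":           AppPolicy("allow", vlan=99),   # guest traffic kept separate
}

def decide(app_profile):
    """Return the policy for a detected application, defaulting to plain allow."""
    return POLICIES.get(app_profile, AppPolicy("allow"))

print(decide("ios-software-update"))   # -> AppPolicy(action='block', ...)
```

The point of pushing this lookup to the AP itself is that the decision is made at the edge, before unwanted traffic ever reaches the core network.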
Both Aerohive’s and Xirrus’ tools go some way to unifying wireless network and device management, but they are still not the one-stop shop a modern network needs. What you get with tools from network device vendors is improved network management, and while that may reduce the number of tools you need to manage your wireless network, it’s not the panacea you might hope for.
If you are going to deliver a corporate app store, or deploy device monitoring agents, you are still going to need an MDM. These are also enterprise tools, aimed at large physical estates that need managed wireless networks, rather than at SMBs relying on off-the-shelf hardware and built-in management tooling.
While it’s clear we’re not yet in a place where wireless network vendors can solve the management problem, that is not actually a bad thing. Those networking vendors are best at giving you the tools you need to run your network, whether it’s handling device registration like Aerohive or network usage like Xirrus.
Building full enterprise-grade system management tools that can determine the capabilities of different versions of Android or iOS is a distraction from delivering the fast, high capacity networks we expect, and it’s not what those wireless network vendors are delivering.
So if you’re waiting for a tool that helps you manage everything on a wireless network from one screen, sadly you’re going to have to wait a little longer.

The Onslaught of the IoT


According to Patrick Gray, the "invasion of things" is already underway. Make sure your organization is prepared to use the new technology to its advantage instead of struggling to catch up. 

Internet of Things

While the Internet of Things (IoT) should not be an unfamiliar term to most IT executives, many consider it primarily a consumer technology that's dominated by smart watches and refrigerators that will tweet when you’re out of milk. Many of the IoT concepts have indeed targeted the consumer, but most of the major technology innovations of the past several years have originated in the consumer space, and the IoT is no exception.

The "things" are coming, like it or not

One of the biggest impacts on the enterprise is that the number and variety of devices connecting to your networks and potentially consuming IT resources are likely to increase exponentially. Consider that the typical enterprise today has one or two devices per user. If that enterprise lacks a BYOD policy, those devices might be part of a dozen or so potential configurations powered by a handful of operating systems. As BYOD becomes more widespread, the per-user device count could jump to three or four devices, as personal laptops, tablets, and smartphones complement the corporate-issued laptop.
With the IoT, everything from watches to fitness monitors to intelligent office furniture comes into the mix. In the coming months, a single gadget-obsessed individual might walk through your lobby sporting more connected and communicating devices on his or her person than an entire department had a couple of years ago.
Traditional ideas around endpoint management, which assume that an IT shop must track, manage, and patch every device connected to its network, quickly become untenable when a single employee might have over a half-dozen connected devices, each with a highly customized OS and wildly different management capabilities. Furthermore, several of these devices may not even appear on your network, for example when a smartphone or computer communicates with a cloud-based service of unknown provenance.

Getting ready for the "things"

Just like non-sanctioned smartphones and tablets caught some organizations flat-footed, so will the onslaught of the IoT, unless your organization takes the time to develop a strategy and response, ideally before the problem manifests itself.
While a prohibition against non-sanctioned devices might be tempting, the experience of most IT leaders with the iPhone provides a clue as to the limited success of such an effort. All it takes to end the most well-intentioned device bans is a CEO who wants his or her new connected device to “just work” while in the office. Rather than a ban on non-work devices, consider a bandwidth-limited visitor/personal network that’s logically or physically separated from your core corporate network. This is an easy solution to the question of personal devices, but not every “thing” that’s likely to wind up coming through your doors will be a personal device.
Shows like CES are full of consumer devices, but the underlying technologies powering these devices -- lightweight operating systems, inexpensive hardware, and long battery life -- are equally applicable to the enterprise. While you may not have an influx of quadcopters, a connected forklift or “smart” pallet is highly likely as consumer devices continue to drive prices down. This means hundreds or even thousands of new devices connecting to your network, sending and receiving information, and requiring tools and infrastructure to analyze the data they generate.
If there’s not at least a line item in your future budgets -- and better yet, an initial technology strategy and lab where you’re testing the impact of the IoT on your business -- you’ll likely be playing a game of catch-up as your network strains to serve an exponential increase in devices, and your peers in the business will demand reports from tools you haven’t yet built. Like it or not, the “invasion of things” is currently underway. Like most technology shifts, the organizations that are prepared will use the new technology to their advantage, while the unprepared will struggle to catch up.

Privacy Becomes Critical in the IoT



A new wave of smart devices, sensors and Internet of Things data collection will make it hard to remain anonymous offline. Will the public wake up to the risks all of that data poses to their privacy?
 
Should we do something just because we can? That simple question has bedeviled many leaders over the centuries, and has naturally arisen more often as the rate of technological change (e.g., chemical weapons, genetic engineering, drones, online viruses) has increased. In many cases, scientists and engineers have been drawn, as if by siren song, to create something that never existed because they had the power to do so.
Many great minds in the 20th century grappled with the consequences of these decisions. One example is theoretical physicist J. Robert Oppenheimer:
"When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you've had your technical success," he said, in aCongressional hearing in 1954. "That is the way it was with the atomic bomb."
In the decades since, with the subsequent development of thermonuclear warheads and intercontinental ballistic missiles and arms buildup during the Cold War, all of mankind has had to live with the reality that we now possessed the means to end life on Earth as we know it, a prospect that has spawned post-apocalyptic fiction and paranoia.
In 2014, the geostrategic imperative to develop the bomb ahead of the Nazis is no longer driving development. Instead, there are a host of decisions that may not hold existential meaning for life on Earth but instead how it is lived by the billions of humans on it.
This year, monkeys in China became the first primates to be born with genome editing. The technique used, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), has immense potential for use in genome surgeryleaping from lab to industry quickly. CRISPR could enable doctors to heal genetic disorders like sickle-cell anemia or more complex diseases in the future. Genome surgery is, unequivocally, an extraordinary advance in medicine. There will be great temptations in the future, however, for its application outside of disease.
Or take a technology that has become a lightning rod: Google Glass. Google banned facial recognition on Google Glass in the name of privacy, but included the feature in Google+ years before.
While Google turns facial recognition off by default, Facebook has it on and suggests whom to tag when users upload photos, thereby increasing the likelihood that people will be identified. As always, the defaults matter: such tagging adds more data to Facebook's servers, including "shadow profiles" of people who may not have created accounts on the service but whom Facebook knows exist.
Over time, the increasing reach of both technology companies will make it harder than ever to be anonymous in public or formerly private spaces. Even if these two tech companies agreed not to integrate facial recognition by default into their platforms or tethered devices, what will the makers of future wearable computing devices or services choose? Government agencies face similar choices; in fact, U.S. Customs and Border Protection is considering scaling facial recognition systems at the U.S. border.
Several news stories from the past week offer more examples of significant choices before society and their long-term impact, along with a lack of public engagement before their installation.
The New York Times reported that a new system of "smart lights" installed in Newark's Liberty International Airport is energy efficient and is also gathering data about the movements of the people the lights "observe." The lights are part of a wireless system that sends the data to software that can detect long lines or recognize license plates.
The story is an instructive data point. The costs of gathering, storing, and analyzing data through sensors and software are plunging, coupled with strong economic incentives to save energy costs and time. As The New York Times reported, such sensors are being integrated into infrastructure all around the world, under the rubric of "smart cities."
There are huge corporations (including Cisco, IBM, Siemens, and Philips) that stand to make billions installing and maintaining the hardware and software behind such systems, many of which I saw on display in Barcelona at the Smart Cities Expo years ago. A number of the wares' potential benefits are tangible, from lower air pollution through reduced traffic congestion to early detection of issues with water or sewage supplies or lower energy costs in buildings or streetlights.
Those economic imperatives will likely mean the questions that legislators, regulators, and citizens will increasingly grapple with will focus upon how such data is used and by whom, not whether it is collected in the first place, although parliaments and officials may decide to go further. "Dumbing down" systems once installed or removing them entirely will take significant legal and political action.
The simple existence of a system like that in the airport in Newark should be a clarion call to people around the country to think about what collecting that data means, and whether it's necessary. How should we weigh the societal costs of such collection against the benefits of efficiency?  
In an ideal world, communities will be given the opportunity to discuss whether installing "smart" streets, stoplights, parking meters, electric meters or garages--or other devices from the much larger Internet of Things--is in the public interest. It's unclear whether local or state governments in the United States or other countries will provide sufficient notice of their proposed installation to support such debate.
Unfortunately, that may leave residents to hope that watchdogs and the media will monitor and report upon such proposals. At the federal government level, there are sufficient resources to do so, as happened last week when The Washington Post reported that the Department of Homeland Security (DHS) was seeking a national license plate tracking system. After the subsequent furor, the DHS canceled the national license plate tracking plan, citing privacy concerns. Data collection that would support such a system may occur, with private firms arguing a First Amendment right to collect license plate data.
What will happen next on this count is unclear, at least to me. While the increasing use of license plate scanners has attracted the attention of the American Civil Liberties Union, Congress and the Supreme Court will have to ultimately guide their future use and application.
They'll also be faced with questions about the growing use of sensors and data analysis in the workplace, according to a well-reported article in the Financial Times. The article's author Hannah Kuchler wrote, "More than half of human resources departments around the world report an increase in the use of data analytics compared with three years ago, according to a recent survey by the Economist Intelligence Unit."
Such systems can monitor behavior, social dynamics, or movement around workspaces, like the Newark airport. All of that data will be discoverable; if email, web browsing history, and texts on a workplace mobile device can be logged and used in e-discovery, data gathered from sensors around the workplace may well be too.
There's reason to think that workplace data collection, at least, will gain some boundaries in the near future. A 2010 Supreme Court decision on sexting that upheld a 1987 decision that recognized the workplace privacy rights of government employees offers some insight.
"The message to government employers is that the courts will continue to scrutinize employers' actions for reasonableness, so supervisors have to be careful," said Jim Dempsey, the Center for Democracy and Technology's vice president for public policy, in an interview. "Unless a 'no privacy' policy is clear and consistently applied, an employer should assume that employees have a reasonable expectation of privacy and should proceed carefully, with a good reason and a narrow search, before examining employee emails, texts, or Internet usage."
Just as a consumer would do well to read the Terms and Conditions (ToC) for a given product or service, so too would a prospective employee be well-advised to read his or her employment agreement. The difference, unfortunately, is that in today's job market, a minority of people have the economic freedom to choose not to work at an organization that applies such monitoring.
If the read-rate for workplace contracts that include data collection is anything like that for End User License Agreements (EULAs) or ToC, solely re-applying last century's "notice and consent" model won't be sufficient. Expecting consumers to read documents that are dozens of pages long on small mobile device screens may be overly optimistic. (The way people read online suggests that many visitors to this article never made it this far. Dear reader, I am glad that you are still with me!)
All too often, people treat any of the long EULAs, ToC, or privacy policies they encounter online as "TL;DR"--something to be instantly scrolled through and clicked, not carefully consumed. A 2012 study found that a consumer would need 250 hours (a month of 10-hour days) to read all of the privacy policies she encountered in a year. The answer to the question about whether most consumers read the EULA, much less understand it, seems to be a pretty resounding "no." That means it will continue to fall to regulators and Congress to define the boundaries for data collection and usage in this rapidly expanding arena, as in other public spaces, and to suggest to the makers of apps and other digital services that pursuing broad principles of transparency, disclosure, usability, and "privacy by design" is the best route for consumers and businesses.
While some officials like FTC commissioner Julie Brill are grappling with big data and consumer privacy (PDF), the rapid changes in what's possible have once again outpaced the law. Until legislatures and regulators catch up, the public has little choice but to look to Google and Mark Zuckerberg's stance on data and privacy, the regulation of data brokers and telecommunications companies, and the willingness of industry and government entities to submit to some measure of algorithmic transparency and audits of data use.
There's hope that in the near future the public will be more actively engaged in discussing what data collection and analysis mean to society, for example through upcoming public workshops on privacy and big data convened by the White House at MIT, NYU, and the University of California at Berkeley, but public officials at every level will need to do much better at engaging the consent of the governed. The signs from Newark and Chicago are not promising.

Data Center Tier Levels and Their Relationship to Uptime



What data center Tier levels can and cannot tell you about uptime

Your enterprise's old data center has reached the end of the road, and the whole kit and caboodle is moving to a colocation provider. What should you be looking for in the data center, and just how much uptime comes from within?
A lot of the work measuring data center reliability has been done for you. The Uptime Institute's simple data center Tier levels describe what should be provided in terms of overall availability by the particular technical design of a facility.
There are four Uptime Tiers. Each Tier must meet or exceed the capabilities of the previous Tier. Tier I is the simplest and least highly available, and Tier IV is the most complex and most available.
Tier I: Single non-redundant power distribution paths serve IT equipment with non-redundant capacity components, leading to an availability target of 99.671% uptime. Capacity components include items such as uninterruptable power supply, cooling systems and auxiliary generators. Any capacity component failure will result in downtime for a Tier I data center, as will scheduled maintenance.
Tier II: A redundant site infrastructure with redundant capacity components leads to an availability target of 99.741% uptime. The failure of any capacity component can be handled by manually switching over to a redundant item, with a short period of downtime, and scheduled maintenance still requires downtime.
Tier III: Multiple independent distribution paths serve IT equipment; there are at least dual power supplies for all IT equipment and the availability target is 99.982% uptime. Planned maintenance can be carried out without downtime. However, a capacity component failure still requires manual switching to a redundant component, which will result in downtime.
Tier IV: All cooling equipment is dual-powered and a completely fault-tolerant architecture leads to an availability target of 99.995% uptime. Planned maintenance and capacity component outages trigger automated switching to redundant components. Downtime should not occur.
In most cases, costs reflect Tiering -- Tier I should be the cheapest, and Tier IV should be the most expensive. But a well-implemented, well-run Tier III or IV facility could have costs that are comparable to a badly run lower-Tier facility.
Watch out for colocation vendors who say their facility is Tier III- or Tier IV-"compliant"; this is meaningless. Quocirca has even seen instances of facility owners saying they are Tier III+ or Tier 3.5. If they want to use the Tier nomenclature, then they should have become certified by the institute.

Your role in uptime

These Uptime Tier levels reflect availability targets for the facility -- not necessarily for the IT equipment inside. Organizations must ensure the architecture of the servers, storage and networking equipment, along with external network connectivity, provides similar or greater levels of redundancy for the whole platform to meet the business' needs.
The uptime levels may seem close and precise; however, a Tier I facility will allow about 30 hours of downtime per annum, whereas a Tier IV facility will allow for less than 30 minutes.
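Those downtime figures follow directly from the availability targets. A quick Python check, assuming a 365-day year of 8,760 hours:

```python
# Quick check: convert each Uptime Institute availability target into downtime
# per year, assuming a 365-day year (8,760 hours).
HOURS_PER_YEAR = 365 * 24

tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}

for tier, availability in tiers.items():
    downtime_h = (1 - availability / 100) * HOURS_PER_YEAR
    print(f"{tier}: {downtime_h:.1f} hours/year ({downtime_h * 60:.0f} minutes)")

# Output: Tier I ~28.8 h, Tier II ~22.7 h, Tier III ~1.6 h, Tier IV ~26 minutes per year.
```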

The majority of Tier III and IV facilities have individual, internal targets of no unplanned downtime; discuss this when interviewing possible outsourcing providers or when designing your own facility.
Although it's tempting to look at the Uptime Tiers as a range of "worst-to-best" facilities, your business requirements must drive the need. Consider a sub-office with a central data center for most of its critical needs and a small on-site server room for non-critical workloads. A Tier III data center may be overly expensive for its needs, while a Tier I or Tier II facility would be highly cost-effective. Tier I and Tier II facilities are not generally suitable for mission-critical workloads, unless they must be used and a plan is in place to manage how the business works during downtime.
Ideally, house critical workloads in Tier III and IV data centers. Tier III facilities still require a solid set of procedures around capacity component failures, and these plans must be tested regularly. Even with Tier IV, don't assume everything will always go according to plan. Simple single-redundancy architecture (each capacity component backed up by one more) can still result in disruptions if more than one capacity component fails.
Ask how rapidly the data center operator replaces failed components to ensure redundancy is restored quickly. Are replacement components held in inventory, or is a supplier contracted to get a replacement on-site and installed within a set timeframe? For a Tier IV facility, this should be measured in hours, not days.
While the Uptime Institute's facility Tiers provide a good basis for what is required to create a data center facility with the requisite levels of availability around the capacity components, the group does not provide reference designs covering areas such as raised vs. solid floors, in-row vs. hot/cold aisle cooling, and so on.

Smart Energy driven by IoT


This report provides overviews and critical statistical information on the electricity market, as well as detailed information on smart grid and smart meter projects and the key players in this market. It covers the areas where smart grids are going to play an important role, such as developments in PV (Solar Energy) and smart cars, as well as their implications for national infrastructure. Special chapters are dedicated to smart technologies for energy efficiency, which depend on having the correct data (big data) from various sources analysed in real time to support instant decision making. The report also discusses Machine-to-Machine (M2M) communication, which is rapidly becoming a key element of smart grids.
Market Overview
Australia Is Right Up With The International Leaders
With a better understanding of the complexity of the transformation of the electricity industry, the words 'smart energy' are becoming more prominent. BuddeComm believes that the term 'smart grids' is too narrow and that eventually 'smart energy' will become the accepted terminology, especially once the communications developments in national broadband networks and mobile broadband start to converge with smart grid developments.
Smart energy signifies a system that is more integrated and scalable, which extends through the distribution system from businesses and homes, back to the sources of energy. A smarter energy system has sensors and controls embedded into its fabric. Because it is interconnected there is a two-way flow of information and energy across the network, including information on pricing. In addition it is intelligent, making use of proactive analytics and automation to transform data into insights and efficiently manage resources.
This links with the telecoms development known as M2M or the 'internet of things' (IoT). For this to happen, various functional areas within the energy ecosystem must be engaged - consumers, business customers, energy providers, regulators, the utility's own operations, smart meters, grid operations, work and asset management, communications, and the integration of distributed resources.
With energy consumption expected to grow worldwide by more than 40% over the next 25 years, demand growth in some parts of the world could exceed 100% in that time. This will produce an increase in competition for resources, resulting in higher costs. In an environment such as this, energy efficiency will become even more important.
Quite apart from any increased demand for energy in specific markets, the move to more sustainable technologies such as electric vehicles and distributed and renewable generation will add even more complexity to operations within the energy sector.
Concerns about issues such as energy security, environmental sustainability and economic competitiveness are triggering a shift in energy policy, technology, and consumer focus. This, in turn, is making it necessary to move on from the traditional energy business models.
About Research and Markets
Research and Markets is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies and new products.

Saturday, March 8, 2014

50 Free System and Network tools

Monday, February 24, 2014: As a system/network administrator, you have your work cut out for you. It is your job to prevent attacks on your system, protect it against attacks that do happen and maintain the system in general. To do this, you will be using some tools, and the work is done best when you use the best possible tools. Here's a handy list!

System and Network Analysis: As an administrator, it is your job to monitor the system and network you preside over. Analysis is an important part of knowing what's happening and when a particular action is required. That is where system and network analysis tools come in handy.

1. NTFS Permissions Explorer

2. Xirrus Wi-Fi Inspector

3. Whois

4. ShareEnum

5. PipeList

6. TcpView

7. The Dude

8. Microsoft Baseline Security Analyzer

9. WireShark

10. Look@LAN

11. RogueScanner

12. Capsa Free Network Analyzer

13. SuperScan

14. Blast

15. UDPFlood

16. IPplan

17. NetStumbler

18. PingPlotter

19. SolarWinds Free Permissions Analyzer for AD

20. Angry IP Scanner

21. FreePortMonitor

22. WirelessNetView

23. BluetoothView

24. Vision

25. Attacker

26. Total Network Monitor

27. IIS Logfile Analyser

28. ntop

System testing and troubleshooting: What's the next logical step after analyzing your network? Of course, testing whether your analysis was right or wrong. To put it more clearly, as a system or network admin, it is one of your jobs to perform tests on your domain. These are the tools that let you do that.

29. Pinkie

30. VMWare Player

31. Oracle VirtualBox

32. ADInsight

33. Process Monitor

34. SpiceWorks Network Troubleshooting

35. RAMMap

36. Autoruns

37. LogFusion

38. Microsoft Log Parser

39. AppCrashView

40. RootKitRevealer

System and network management: These are tools that allow you to manage the network or system. In a way, they comprise various tools that help an IT professional manage a bunch of tasks or certain specific tasks.

41. Bitcricket IP Subnet Calculator

42. EMCO Remote Installer Starter

43. ManagePC

44. Pandora FMS

45. SNARE Audit and EventLog Management

46. OCS Inventory

47. Zenoss Core – Enterprise IT Monitoring

48. Unipress Free Help Desk

49. SysAidIT Free Help Desk

50. Cyberx Password Generator Pro

These tools didn't suit your fancy? Want more? Click here.

