Monday, March 17, 2014

Privacy becomes a key concern in the Internet of Things



A new wave of smart devices, sensors, and the Internet of Things collecting data will make it hard to remain anonymous offline. Will the public wake up to the risks all of that data poses to their privacy?
 
Image: iStock/maxkabakov
Should we do something just because we can? That simple question has bedeviled many leaders over the centuries, and has naturally arisen more often as the rate of technological change (e.g., chemical weapons, genetic engineering, drones, online viruses) has increased. In many cases, scientists and engineers have been drawn, as if by siren song, to create something that never existed because they had the power to do so.
Many great minds in the 20th century grappled with the consequences of these decisions. One example is theoretical physicist J. Robert Oppenheimer:
"When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you've had your technical success," he said, in aCongressional hearing in 1954. "That is the way it was with the atomic bomb."
In the decades since, with the subsequent development of thermonuclear warheads and intercontinental ballistic missiles and the arms buildup of the Cold War, all of mankind has had to live with the reality that we possess the means to end life on Earth as we know it, a prospect that has spawned post-apocalyptic fiction and paranoia.
In 2014, the geostrategic imperative to develop the bomb ahead of the Nazis is no longer driving development. Instead, there are a host of decisions that may not hold existential meaning for life on Earth, but will instead shape how it is lived by the billions of humans on it.
This year, monkeys in China became the first primates to be born with edited genomes. The technique used, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats), has immense potential for use in genome surgery and is leaping from lab to industry quickly. CRISPR could enable doctors to heal genetic disorders like sickle-cell anemia or, in the future, more complex diseases. Genome surgery is, unequivocally, an extraordinary advance in medicine. There will be great temptations in the future, however, for its application outside of disease.
Or take a technology that has become a lightning rod: Google Glass. Google banned facial recognition on Google Glass in the name of privacy, but included the feature in Google+ years before.
While Google turns facial recognition off by default, Facebook turns it on and suggests people to tag when users upload photos, thereby increasing the likelihood that people will be identified. As always, the defaults matter: such tagging adds more data to Facebook's servers, including "shadow profiles" of people who may never have created accounts on the service but whom Facebook knows exist.
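To see how much the default alone matters, consider a toy calculation. The numbers below are hypothetical, chosen only to illustrate the mechanism: most users never change a default, so an opt-out design collects vastly more data than an opt-in one.

```python
# A toy model (all numbers hypothetical) of why defaults matter:
# few users ever change a default setting, so an opt-out design
# collects far more tagging data than an opt-in design.

users = 1_000_000
deviation_rate = 0.05  # assume ~5% of users change any given default

# Opt-out: the feature is on unless a user turns it off.
opt_out_enabled = users * (1 - deviation_rate)
# Opt-in: the feature is off unless a user turns it on.
opt_in_enabled = users * deviation_rate

print(f"opt-out default: {opt_out_enabled:,.0f} users with tagging on")
print(f"opt-in default:  {opt_in_enabled:,.0f} users with tagging on")
# -> 950,000 vs. 50,000: a 19x difference driven by the default alone
```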
Over time, the increasing reach of both technology companies will make it harder than ever to be anonymous in public or formerly private spaces. Even if these two tech companies agreed not to integrate facial recognition by default into their platforms or tethered devices, what will the makers of future wearable computing devices or services choose? Government agencies face similar choices; in fact, U.S. Customs and Border Protection is considering scaling facial recognition systems at the U.S. border.
Several news stories from the past week offer more examples of the significant choices before society, their long-term impact, and the lack of public engagement before the systems in question were installed.
The New York Times reported that a new system of "smart lights" installed in Newark's Liberty International Airport is energy efficient and is also gathering data about the movements of the people the lights "observe." The lights are part of a wireless system that sends the data to software that can detect long lines or recognize license plates.
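To make that pipeline concrete, here is a minimal sketch of how readings from networked fixtures might be aggregated into a "long line" alert. This is not Newark's actual system; every name, zone, and threshold below is a hypothetical stand-in.

```python
# A minimal, illustrative sketch -- not Newark's actual system.
# It shows one way occupancy readings from networked light fixtures
# could become a "long line" alert. All names, zones, and thresholds
# here are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    fixture_id: str    # which light fixture reported
    zone: str          # e.g., a security checkpoint or baggage claim
    person_count: int  # people detected beneath the fixture

def detect_long_lines(readings, threshold=25):
    """Average person counts per zone; flag zones over the threshold."""
    by_zone = {}
    for r in readings:
        by_zone.setdefault(r.zone, []).append(r.person_count)
    return {zone: mean(counts)
            for zone, counts in by_zone.items()
            if mean(counts) > threshold}

readings = [
    SensorReading("lamp-014", "checkpoint-A", 31),
    SensorReading("lamp-015", "checkpoint-A", 28),
    SensorReading("lamp-201", "baggage-claim-3", 6),
]
print(detect_long_lines(readings))  # {'checkpoint-A': 29.5}
```

The point is not the code but the economics: once fixtures report structured data, queue detection, license plate recognition, or any other analysis is only a software change away.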
The story is an instructive data point. The costs of gathering, storing, and analyzing data through sensors and software are plunging, and they are coupled with strong economic incentives to save energy and time. As The New York Times reported, such sensors are being integrated into infrastructure all around the world, under the rubric of "smart cities."
There are huge corporations (including Cisco, IBM, Siemens, and Philips) that stand to make billions installing and maintaining the hardware and software behind such systems, many of which I saw on display in Barcelona at the Smart Cities Expo years ago. A number of the wares' potential benefits are tangible, from lower air pollution through reduced traffic congestion to early detection of issues with water or sewage supplies or lower energy costs in buildings or streetlights.
Those economic imperatives will likely mean the questions that legislators, regulators, and citizens will increasingly grapple with will focus upon how such data is used and by whom, not whether it is collected in the first place, although parliaments and officials may decide to go further. "Dumbing down" systems once installed or removing them entirely will take significant legal and political action.
The simple existence of a system like that in the airport in Newark should be a clarion call to people around the country to think about what collecting that data means, and whether it's necessary. How should we weigh the societal costs of such collection against the benefits of efficiency?  
In an ideal world, communities will be given the opportunity to discuss whether installing "smart" streets, stoplights, parking meters, electric meters, or garages--or other devices from the much larger Internet of Things--is in the public interest. It's unclear whether local or state governments in the United States or other countries will provide sufficient notice of their proposed installation to support such debate.
Unfortunately, that may leave residents to hope that watchdogs and the media will monitor and report upon such proposals. At the federal level, there are sufficient resources to do so, as happened last week when The Washington Post reported that the Department of Homeland Security (DHS) was seeking a national license plate tracking system. After the subsequent furor, the DHS canceled the plan, citing privacy concerns. Data collection that would support such a system may still occur, however, with private firms arguing a First Amendment right to collect license plate data.
What will happen next on this count is unclear, at least to me. While the increasing use of license plate scanners has attracted the attention of the American Civil Liberties Union, Congress and the Supreme Court will ultimately have to guide their future use and application.
They'll also be faced with questions about the growing use of sensors and data analysis in the workplace, according to a well-reported article in the Financial Times. The article's author, Hannah Kuchler, wrote, "More than half of human resources departments around the world report an increase in the use of data analytics compared with three years ago, according to a recent survey by the Economist Intelligence Unit."
Such systems can monitor behavior, social dynamics, or movement around workspaces, much as the lights do at the Newark airport. All of that data will be discoverable; if email, web browsing history, and texts on a workplace mobile device can be logged and used in e-discovery, data gathered from sensors around the workplace may well be too.
There's reason to think that workplace data collection, at least, will gain some boundaries in the near future. A 2010 Supreme Court decision on sexting, which upheld a 1987 ruling recognizing the workplace privacy rights of government employees, offers some insight.
"The message to government employers is that the courts will continue to scrutinize employers' actions for reasonableness, so supervisors have to be careful," said Jim Dempsey, the Center for Democracy and Technology's vice president for public policy, in an interview. "Unless a 'no privacy' policy is clear and consistently applied, an employer should assume that employees have a reasonable expectation of privacy and should proceed carefully, with a good reason and a narrow search, before examining employee emails, texts, or Internet usage."
Just as a consumer would do well to read the Terms and Conditions (ToC) for a given product or service, so too would a prospective employee be well-advised to read his or her employment agreement. The difference, unfortunately, is that in today's job market, a minority of people have the economic freedom to choose not to work at an organization that applies such monitoring.
If the read-rate for workplace contracts that include data collection is anything like that for End User License Agreements (EULAs) or ToC, solely re-applying last century's "notice and consent" model won't be sufficient. Expecting consumers to read documents that are dozens of pages long on small mobile device screens may be overly optimistic. (The way people read online suggests that many visitors to this article never made it this far. Dear reader, I am glad that you are still with me!)
All too often, people treat any of the long EULAs, ToC, or privacy policies they encounter online as "TL;DR"--something to be instantly scrolled through and clicked, not carefully consumed. A 2012 study found that a consumer would need 250 hours (a month of 10-hour days) to read all of the privacy policies she encountered in a year. The answer to the question of whether most consumers read the EULA, much less understand it, seems to be a pretty resounding "no." That means it will continue to fall to regulators and Congress to define the boundaries for data collection and usage in this rapidly expanding arena, as in other public spaces, and to suggest to the makers of apps and other digital services that pursuing broad principles of transparency, disclosure, usability, and "privacy by design" is the best route for consumers and businesses.
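The arithmetic behind that 250-hour figure is easy to check. The inputs below are illustrative assumptions, not the study's exact numbers, but they show how quickly the reading burden compounds.

```python
# Back-of-the-envelope check of the reading burden. Both inputs are
# illustrative assumptions, not figures from the 2012 study itself.

policies_per_year = 1500   # assumed number of policies encountered
minutes_per_policy = 10    # assumed reading time per policy

total_hours = policies_per_year * minutes_per_policy / 60
ten_hour_days = total_hours / 10

print(f"{total_hours:.0f} hours, or {ten_hour_days:.0f} ten-hour days")
# -> 250 hours, or 25 ten-hour days: roughly a month of workdays
```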
While some officials, like FTC commissioner Julie Brill, are grappling with big data and consumer privacy (PDF), the rapid changes in what's possible have once again outpaced the law. Until legislatures and regulators catch up, the public has little choice but to look to the stances of Google and Mark Zuckerberg on data and privacy, the regulation of data brokers and telecommunications companies, and the willingness of industry and government entities to submit to some measure of algorithmic transparency and audits of data use.
There's hope that in the near future the public will be more actively engaged in discussing what data collection and analysis mean to society, through upcoming public workshops on privacy and big data convened by the White House at MIT, NYU, and the University of California at Berkeley, but public officials at every level will need to do much better at engaging the consent of the governed. The signs from Newark and Chicago are not promising.
