Wednesday, February 29, 2012

New Google Privacy Policy and "Do Not Track"

It's been such a disastrous few weeks for Google in the privacy-violation department that I thought I'd go back to the topic of its new privacy rules, as well as get into some of the important technicalities associated with Do Not Track protections in light of the President's proposed Privacy Bill of Rights.

First, let's go to reigning anti-privacy global champion Google, which is changing its privacy policies this week, placing 60 of its 70 existing product privacy policies under one blanket policy and breaking down the identity barriers between them (to accommodate its new Google+ social network software) as well. In other words, Google will combine data from all its services, so when users are signed in, Google may combine identity information users provided from one service with information from other services. The goal is to treat each user as one individual across all Google products, such as Gmail, Google Docs, YouTube and other Web services. You can read more about this policy in a recent post of mine.

Then we find out that Google has been bypassing the privacy settings in Apple's Safari browser. This is of particular concern and importance because that system, and those users, are specifically INTENDING that such monitoring be BLOCKED.

So that was the "Google" backdrop for a few other related stories. First, the President proposed a Consumer Privacy Bill of Rights that has some potential, though numerous pitfalls (I'll get to that later). And second, while Google has agreed to offer a kind of "Do Not Track" mechanism on Chrome, this didn't stop The Electronic Privacy Information Center (EPIC) from attempting to make Google obtain its users permission BEFORE sharing their private information as a result of its new privacy policy.

Unfortunately, U.S. District Judge Amy Berman Jackson said the court had no authority to force the FTC to keep Google in check. As detailed by Courthouse News, this isn't Google's first brush with the law: In June 2011, a federal judge approved an $8.5 million class action settlement brought by 31 million Gmail users who sued Google for exposing their personal information through its recently discontinued email feature, Google Buzz. In their lawsuit, users called the feature, which automatically shared their information with their email contacts, an "indiscriminate bludgeon" that could reveal the names of doctors' patients or lawyers' clients, or even the contacts of a gay person "who was struggling to come out of the closet and had contacted a gay support group."






The judge also made it clear that her ruling should not be taken as an endorsement of Google's privacy policies or her opinion on whether they violate the consent order.

So what does Google's new policy mean to you and what are some ways to better protect your privacy?

CNN.com suggests - in an article entitled "How to prepare for Google's privacy changes" - the following:

Don't sign in

This is the easiest and most effective tip. Many of Google's services -- most notably search, YouTube and Maps -- don't require you to sign in to use them. If you're not logged in, via Gmail or Google+, for example, Google doesn't know who you are and can't add data to your profile.

But to take a little more direct action ...

Removing your Google search history

Eva Galperin of the Electronic Frontier Foundation has compiled a step-by-step guide to deleting and disabling your Web History, which includes the searches you've done and sites you've visited.
It's pretty quick and easy:

-- Sign in to your Google account
-- Go to www.google.com/history
-- Click "Remove all Web History"
-- Click "OK"

As the EFF notes, deleting your history will not prevent Google from using the information internally. But it will limit the amount of time that it's fully accessible. After 18 months, the data will become anonymous again and won't be used as part of your profile.


Clearing your YouTube history

Similarly, users may want to remove their history on YouTube. That's also pretty quick and easy.
-- Sign in on Google's main page
-- Click on "YouTube" in the toolbar at the top of the page
-- On the right of the page, click your user name and select "Video Manager"
-- Click "History" on the left of the page and then "Clear Viewing History"
-- Refresh the page and then click "Pause Viewing History"
-- You can clear your searches on YouTube by going back and choosing "Clear Search History" and doing the same steps.


Interestingly, just as the White House pushes its new privacy bill of rights for Congress to consider, Google (in the wake of its privacy invasions) decided to get behind "Do Not Track" for Google Chrome. As Computerworld defines it - and how such a mechanism is eventually defined and operated is critical to its usefulness - "Do Not Track" is a "technology that relies on information in the HTTP header, part of the requests and responses sent and received by a browser as it communicates with a website, to signal that the user does not want to be tracked by online advertisers and sites.

In the browsers that now support the Do Not Track header, a user tells sites he or she does not want to be tracked by setting a single option. In Mozilla's Firefox, for instance, that's done through the Options (on Windows) or Preferences (Mac) pane by checking a box marked 'Tell web sites I do not want to be tracked.'" That, of course, is the intent...just how well it does that, and how, is the million-dollar question.
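
To make the mechanism concrete, here's a minimal sketch (in TypeScript, purely as an illustration - not Chrome's, Firefox's, or any site's actual code) of how a page's own script could read that preference. The property names and values are the part to hedge: browsers have exposed the setting differently over time.

```typescript
// A minimal sketch, assuming nothing beyond standard browser APIs: reading
// the visitor's Do Not Track preference from the navigator object. Property
// names and values have varied by browser and year (Firefox has reported
// "yes"/"unspecified", IE9 used a vendor-prefixed property, the later draft
// standard uses "1"/"0"/null), so the check is deliberately defensive.
type DntNavigator = Navigator & {
  doNotTrack?: string | null;
  msDoNotTrack?: string | null;
};

function visitorHasEnabledDoNotTrack(): boolean {
  const nav = window.navigator as DntNavigator;
  const signal = nav.doNotTrack ?? nav.msDoNotTrack ?? null;
  return signal === "1" || signal === "yes";
}

// A site that chooses to honor the preference could branch on it:
if (visitorHasEnabledDoNotTrack()) {
  console.log("DNT is on: skip loading third-party tracking scripts.");
} else {
  console.log("No DNT signal: default behavior.");
}
```

The point is simply that the signal is a single flag riding along with the browser; everything else - what a site actually does once it sees the flag - is policy, not technology.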

So what did Google just agree to by adding its support for Do Not Track to its Chrome browser? Computerworld has more:

So, when I tell my browser to send the Do Not Track request, no one will monitor my movements? 

Hold on there, pardner. Thursday's commitment by Google to support Do Not Track in Chrome may have been a clear win for the specific way that request is communicated, but there's no such clarity on what websites do -- or don't do -- when they receive that signal.

"On the technology side, this is an unambiguous win, but on the policy side there is still a lot of work to be done," Mayer said yesterday. The Electronic Frontier Foundation (EFF), an online privacy advocacy organization, said much the same. "While today was a great advancement on the Do Not Track technology, it did not meaningfully move the ball forward on the Do Not Track policy," said Rainey Reitman, the EFF's activism director, in a Thursday blog.

What have sites agreed to do with Do Not Track?  

They'll stop using cookies to craft targeted ads, the kind pointed at you based on your past surfing and other online behavior. 

But the companies that lined up Thursday to support Do Not Track -- the ad networks, websites and corporations who belong to the latest online ad industry trade group, the Digital Advertising Alliance (DAA) -- haven't promised to actually stop tracking users' Web movements. Instead, they've pledged to not use tracking data to serve targeted ads -- which the DAA calls "behavioral advertising" -- or use that tracking information "for the purpose of any adverse determination concerning employment, credit, health treatment or insurance eligibility, as well as specific protections for sensitive data concerning children." (IDG, the parent company of Computerworld, is a member of the DAA, according to the alliance's list of participating companies and ad networks. Other media firms that will hew to the DAA's behavioral ad guidelines around Do Not Track include Conde Nast, ESPN, Forbes and Time.)

What? So Do Not Track doesn't mean just that? 

Right, which is why privacy groups are pushing for a stricter interpretation. The EFF, for one, is leery of the advertising industry's sincerity.

"Historically, the DAA has eschewed providing users with powerful mechanisms for choices when it comes to online tracking," said EFF's Reitman. "The self-regulatory standards for behavioral advertising have offered consumers a way to opt out of viewing behaviorally targeted ads without actually stopping the online tracking, which is the root of the privacy concern."

Reitman worried that the DAA would mess with the simplicity of Do Not Track, and try to turn it into "slippery legalese that doesn't promise to do much of anything about tracking."

Anything else about the Do Not Track promises made by the advertising industry I should know? 

Yep, one interesting aspect: The DAA said it would not honor the setting if "any entity or software or technology provider other than the user exercises such a choice." EFF's Reitman interpreted that as a pre-emptive strike against browser makers that may want to turn on Do Not Track by default. (None do at this point.... It's off in Firefox, IE9 and Safari until the user manually changes the setting.)

Click here for more.

With that, let me take you to the New York Times article that delves deeply into the Do Not Track concept and where the battle lines will likely be drawn: between those who want privacy, and more control over their own data, and those who want to profit from violating that privacy and selling that data.

The issue of digital privacy, especially how users’ data is collected online and then employed to show those users ads tailored to them, has been hotly debated for years. The announcements represent the attempt to satisfy consumer privacy concerns while not stifling the growth of online advertising, which is seen as the savior of media and publishing companies as well as the advertising industry. According to the Interactive Advertising Bureau, digital advertising revenues in the United States were $7.88 billion for the third quarter of 2011, a 22 percent increase over the same period in 2010. 

The industry’s compromise on a “Do Not Track” mechanism is one result of continuing negotiations among members of the Federal Trade Commission, which first called for such a mechanism in its initial privacy report; the Commerce Department; the White House; the Digital Advertising Alliance; and consumer privacy advocates. 

Until now, methods for opting out of custom advertising varied depending on the privacy settings of a user’s browser or whether a user clicked on the blue triangle icons in the corners of some digital ads. Under the new system, browser vendors will build an option into their browser settings that, when selected, will send a signal to companies collecting data that the user does not want to be tracked. 

The agreement covers all the advertising alliance’s members, including Google, Yahoo, AOL, Time Warner and NBCUniversal. 

Privacy advocates complain that the mechanism does not go nearly far enough in part because it affects only certain marketers. Many publishers and search engines, like Google, Amazon or The New York Times, are considered “first-party sites,” which means that the consumer goes to these Web pages directly. First-party sites can still collect data on visitors and serve them ads based on what is collected. 

...

Some consumer privacy advocates, while offering measured praise for the new privacy option, saw the move as an attempt to thwart a more restrictive stance on data collection. Jeffrey Chester, the head of the Center for Digital Democracy, which is pushing for more restrictions on data collection, called the move a win for the advertising lobby. 

In a statement, Mr. Chester said: “We cannot accept any ‘deal’ that doesn’t really protect consumers, and merely allows the data-profiling status quo to remain. Instead of negotiations, C.D.D. would have preferred the White House to introduce new legislation that clearly protected consumers online.” 

But advertisers have plenty to fear if consumers use Do Not Track in large numbers. “If there’s a high rate of opt-out, it’s an issue,” said George Pappachen, the chief privacy officer of the Kantar Group, the research and consultant unit of WPP. “Our position is data should flow,” Mr. Pappachen said, adding that data helps drive innovation and newer commercial models.

...

And there are still unresolved technical issues regarding Do Not Track, including what defines tracking and how that would apply to first-party and third-party Web sites. Over the last few months the World Wide Web Consortium, an international group that sets voluntary technical standards for the Web, has been working with representatives from companies like Microsoft, Google and Nielsen, along with academics, privacy advocates, legislators and digital advertising groups, to define the technical standard of Do Not Track. 

The consortium is also considering whether sites like Facebook, whose “like” button is used across multiple Web sites, would be considered first-party or third-party sites. “I do think you will see a lot of contention going forward about what Do Not Track means,” said Thomas Roessler, the technology and society domain leader at the consortium. 

Whether any companies should be allowed to collect data and follow users online, regardless of who they are, remains “the million-dollar question,” said Alex Fowler, the global privacy leader at Mozilla, the nonprofit organization that created the Firefox browser. Firefox was one of the first to include a Do Not Track option. “When you look at user testing, the expectation for the user for Do Not Track means, don’t behaviorally target me and also don’t collect information on me,” Mr. Fowler said. 


Stay tuned...

Monday, February 27, 2012

Obama Administration's Consumer Privacy Bill of Rights

By now most anyone who has come to this blog knows, at least in general terms, what is called behavioral targeting. This massive, growing multi-billion dollar industry is built on the tracking of you on the internet - and EVERYTHING you do on it...and then compiling, storing and selling that data to third-party advertisers (while also being accessed by government when requested...which we know is a lot).


This rise in behavioral tracking makes it possible for consumer information to be misused, increases the threat of identity theft, and is a fundamental violation of privacy. Oftentimes, such behavioral tracking is particularly targeted at vulnerable consumers for high-price loans, bogus health cures and other potentially harmful products and services. And to date, to the extent "Do Not Track" rights exist at all, they have amounted to a voluntary request of industry - which borders on pointless.

Now to my cautious optimism regarding the Obama Administration's announcement last week that it supported a Consumer Privacy Bill of Rights. The proposal lays out seven principles of privacy protections, including the right to exercise control over the dissemination of one’s data and the right to transparent privacy policies. The bill of rights is not legislation, acting more as a framework and statement of principles, but it does at least sound like the Administration "gets it" in a way we haven't heard before.

Consumers deserve the kinds of broad rights to protect their own information online that the President is advocating - particularly the fundamental right to control how our personal data is used and the right to avoid having our information collected and used for multiple unknown purposes. We also DESERVE the right to make sure our information is held securely, and not KEPT for long periods of time. And of course, we must have the right to hold those who are handling or misusing our personal data accountable when things go wrong.
To be sure, it's an outline, and it still needs to make it through the legislative process (though the administration has threatened to bypass Congress...which is also a good sign) - meaning a GOP-controlled House will have an opportunity to destroy, as it does with all public policy, anything it gets its hands on if it serves the profit motives of big business.

Clearly, when you talk about companies like Google, Apple, Facebook and Microsoft...we're talking some big-time heavy hitters with LARGE checkbooks and hordes of high-priced lobbyists. In other words, the devil will be in the details...and what will matter most might just be whether there are real, enforceable rules that punish these giants for breaking them.

But before I go into more of why tough legislation is needed - and how privacy on the web can be better protected - let's get to some of the details released.


Companies responsible for the delivery of nearly 90 percent of online behavioral advertisements — ads that appear on a user’s screen based on browsing and buying habits — have agreed to comply when consumers choose to control online tracking, the consortium said on Wednesday.

But even if a click of a mouse or a touch of a button can thwart Internet tracking devices, there is no guarantee that companies won’t still manage to gather data on Web behavior. Compliance is voluntary on the part of consumers, Internet advertisers and commerce sites.

"The real question is how much influence companies like Google, Microsoft, Yahoo and Facebook will have in their inevitable attempt to water down the rules that are implemented and render them essentially meaningless,” John M. Simpson, privacy project director for Consumer Watchdog, said in response to the administration’s plan. "A concern is that the administration’s privacy effort is being run out of the Commerce Department.”

“It’s critical that government enact strong privacy regulations whose protections will remain with consumers as they interact on their home computer, cell phones, PDAs or even at the store down the street. Clear rules will help consumers understand how their information is used, obtained and tracked,” said Amina Fazlullah of U.S. Public Interest Research Group. “In the event of abuse of consumer information, this legislation could provide consumers a clear pathway for assistance from government agencies or redress in the courts.”

...

The new privacy outline brings together several efforts to develop and enforce privacy standards that have been progressing for the last couple of years on parallel tracks, under the direction of advertisers, Internet commerce sites and software companies.

The next step will be for the Commerce Department to gather Internet companies and consumer advocates to develop enforceable codes of conduct aligned with a “Consumer Privacy Bill of Rights” released as part of the administration’s plan on Wednesday. The bill of rights sets standards for the use of personal data, including individual control, transparency, security, access, accuracy and accountability. 


I'm a big supporter of limiting tracking of our online activities - not just in the commercial sphere, but also protecting that data from a government that increasingly demands it from private companies. Similarly, there's a long, clear record that self-regulation doesn't work - so creating rules and laws to protect people's privacy on the internet is critical, and now possible.

In principle, the proposal does look good...so what I'll be watching for is just how watered down this legislation becomes over time...and that we don't forget some of the key protections necessary, as recently outlined by a coalition of consumer groups, including: 

· Sensitive information (like health or financial information) should not be collected or used for behavioral tracking or targeting.
· No behavioral data should be collected or used from anyone under age 18 to the extent that age can be inferred.
· Web sites and ad networks shouldn’t be able to collect or use behavioral data for more than 24 hours without getting the individual’s affirmative consent.
· Behavioral data shouldn’t be used to unfairly discriminate against people or in any way that would affect an individual's credit, education, employment, insurance, or access to government benefits.


Here are a couple responses from privacy advocates to the Administration's proposal worth noting here:

“The devil is going to be in the details,” acknowledges Paul Stephens, director of policy and advocacy for the nonprofit group Privacy Rights Clearinghouse. “It is a framework that certainly represents a decent start, but the key is going to be in three components,” he says, which include the legislation and regulations that grow out of it, and the enforcement thereof.

On paper, then, it looks fine as a work in progress, though Stephens does acknowledge that at least one provision – the “Respect for Context” clause, which says companies “will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data” – seems somewhat subjective and open for interpretation. As such, consumers concerned about their privacy will have to wait and see how this vague language of the bill of rights will translate into actionable regulation.

...

“Anybody can stand behind some broad principles about respecting privacy rights,” Reitman says. “Whether it’s enforceable is still a far-off issue....even without legislation, the administration will convene multistakeholder processes that use these rights as a template for codes of conduct that are enforceable by the Federal Trade Commission.”
...

“The way it is right now … it’s historically been self-enforcing,” says Rainey Reitman, activism director for the digital rights advocacy group the Electronic Frontier Foundation. “The White House statement today changes that, so it will be under the umbrella of FTC enforcement.”


Ellen Bloom, a senior director of policy for Consumers Union, was at the press conference today in Washington, D.C., where the "Consumer Privacy Bill of Rights" was unveiled. She said consumers are very concerned about Internet companies passing along their private information to third parties. And she is happy that the administration is taking steps with the "Consumer Privacy Bill of Rights" to protect consumers. But she said the group will continue to educate and advocate to make sure privacy protections are strong enough to do the job.

"We are glad that the FTC and the advertising industry will breathe new life into the Do Not Track rules," she said. "This is a welcome first step toward providing a single simple tool to opt out of being tracked online. We are encouraged that we're on the right track. But we are not ready to rest."


More Backdrop on Behavioral Tracking
To get an even better understanding of why this matters, and what's happening to you and your information every time you get on the net, check out this congressional testimony from a year or two ago from Jeff Chester of the Center for Digital Democracy...most of this is from the testimony and the group's press release...and it should clarify some of this obviously complicated issue.

“As with our financial system, privacy and consumer protection regulators have failed to keep abreast of developments in the area they are supposed to oversee,” he explained. “In order to ensure adequate trust in online marketing—an important and growing sector of our economy—Congress must enact sensible policies to protect consumers.”

“Whether using a search engine, watching an online video, creating content on a social network, receiving an email, or playing an interactive video game, we are being digitally shadowed online....Our travels through the digital media are being monitored, and digital dossiers on us are being created—and even bought and sold.” 

Singling out behavioral and “predictive” targeting for their violations of user privacy, Chester noted that the “consumer profiling and targeted advertising take place largely without our knowledge or consent, and affects such sensitive areas as financial transactions and health-related inquiries. Children and youth, among the most active users of the Internet and mobile devices, are especially at risk in this new media-marketing ecosystem.”

“Americans shouldn’t have to trade away their privacy and accept online profiling and tracking as the price they must pay in order to access the Internet and other digital media,” Chester declared, adding that far from being an impediment to continued growth in the online sector, meaningful privacy safeguards will actually stimulate the digital economy.

“The uncertainty over the loss of privacy and other consumer harms will continue to undermine confidence in the online advertising business,” he explained. “That’s why the online ad industry will actually greatly benefit from privacy regulation. Given a new regulatory regime protecting privacy, industry leaders and entrepreneurs will develop new forms of marketing services where data collection and profiling are done in an above-board, consumer-friendly fashion.”

Privacy is a fundamental right in the United States. For four decades, the foundation of U.S. privacy policies has been based on Fair Information Practices: collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability.

Those principles ensure that individuals are able to control their personal information, help to protect human dignity, hold accountable organizations that collect personal data, promote good business practices, and limit the risk of identity theft. Developments in the digital age urgently require the application of Fair Information Practices to new business practices. Today, electronic information from consumers is collected, compiled, and sold; all done without reasonable safeguards.

Consumers are increasingly relying on the Internet and other digital services for a wide range of transactions and services, many of which involve their most sensitive affairs, including health, financial, and other personal matters. At the same time many companies are now engaging in behavioral advertising, which involves the surreptitious tracking and targeting of consumers.

Click by click, consumers’ online activities – the searches they make, the Web pages they visit, the content they view, the videos they watch and their other interactions on social networking sites, the content of emails they send and receive, how they spend money online, their physical locations using mobile Web devices, and other data – are logged into an expanding profile and analyzed in order to target them with more "relevant" advertising.

This is different from the "targeting" used in contextual advertising, in which ads are generated by a search that someone is conducting or a page the person is viewing at that moment. Behavioral tracking and targeting can combine a history of online activity across the Web with data derived offline to create even more detailed profiles. The data that is collected through behavioral tracking can, in some cases, reveal the identity of the person, but even when it does not, the tracking of individuals and the trade of personal or behavioral data raise many concerns.
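
To put that contextual-versus-behavioral distinction in concrete terms, here's a schematic sketch (invented data shapes, not any real ad system's logic): the contextual picker needs only the page in front of you, while the behavioral picker needs a stored dossier.

```typescript
// Schematic contrast only - the data shapes and matching are invented for
// illustration and are not any real ad system's logic.
interface Ad {
  id: string;
  keywords: string[];
}

interface BehavioralProfile {
  userId: string;
  interests: string[];      // accumulated across many sites and sessions -
  searchHistory: string[];  // the part privacy advocates object to
}

// Contextual: the ad is chosen from the page being viewed right now.
// Nothing about the person needs to be stored anywhere.
function pickContextualAd(pageKeywords: string[], inventory: Ad[]): Ad | undefined {
  return inventory.find(ad => ad.keywords.some(k => pageKeywords.includes(k)));
}

// Behavioral: the ad is chosen from a stored dossier about the person,
// regardless of what page they happen to be on at the moment.
function pickBehavioralAd(profile: BehavioralProfile, inventory: Ad[]): Ad | undefined {
  const signals = new Set([...profile.interests, ...profile.searchHistory]);
  return inventory.find(ad => ad.keywords.some(k => signals.has(k)));
}
```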

Let's hope this Administration's actions match its words, that industry power won't weaken these principles beyond their usefulness, and of course, let's hope Congress is bypassed, as they serve NO PURPOSE (esp. the House) except to protect corporations and undermine people.

More to come....

Tuesday, February 21, 2012

Google Secretly Bypassing Safari Privacy Settings

The global tech giant Google, a company becoming an increasingly giant information-control monopoly, has done it again. I speak of course of its long, sordid, and adversarial relationship with privacy, and this weekend's news that it, along with several other advertising companies, has been bypassing the privacy settings in Apple's Safari browser. This is of particular concern and importance because that system, and those users, are specifically INTENDING that such monitoring be BLOCKED.


Let's remember, it was just two weeks ago that a bit of a firestorm was sparked by Google abruptly changing its privacy policies - folding 60 of its 70 existing product privacy policies into one blanket policy and breaking down the identity barriers between them (to accommodate its new Google+ social network software) - while making opting out of the massive amount of data sharing that will result nearly impossible.

Let us also remember that we know - and they have been sued for it - that companies like Google, Yahoo, Microsoft and other Internet companies track and profile users and then auction off ads targeted at individual consumers in the fractions of a second before a Web page loads.

We should also consider that sordid history of privacy and Google I mentioned at the start: from Google Books, to the loss of "Locational Privacy," to the company's lobbying efforts in Congress, to its cloud computing, to its increasing use and expansion of behavioral marketing techniques, to Google Street View cars gathering private information from unaware local residents, to the company teaming with the National Security Agency (the agency responsible for such privacy-violation greatest hits as warrantless wiretapping) "for technical assistance," to the infamous Google Buzz, to the company's recent admission that it gets THOUSANDS of requests from the government for information about its users, to claims that the company manipulates its search results to favor its own products.

Before I get to some obvious solutions and responses to this latest Google controversy (like data retention limits and Do Not Track options), let me get to the story:

The Stanford study was written by Jonathan Mayer, a graduate student in law and computer science who has cranked out a growing body of headline-generating literature on online privacy. In his paper, he noted that unlike every other browser vendor, Apple's Safari automatically blocks tracking cookies generated by websites that users visit. Apple's Safari is one of the most popular browsers for mobile devices, and the default browser on Macs.


These cookies can collect information about where users go online and what they do - data that advertisers treasure. There are exceptions to Safari's cookie blocking, however. For instance, it allows what are known as "first-party cookies," those that sites like Facebook or Google drop onto devices so users don't have to sign in every time they visit. Certain carve-outs also allow Facebook users to "like" things on third-party sites.

Unlike Facebook, Google had a problem: its social and ad networks run on different domains from its main one, Google.com. That prevents it from allowing a user of the Google+ social network to give a virtual thumbs up (or "+1") to an ad on another site, a step that makes such ads more valuable. That would require a "third-party cookie," which is blocked by Safari.
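
For readers who want the mechanics spelled out, here's a hedged sketch (using Node and Express purely as an illustration - this is not Google's or Apple's code) of why the very same tracking cookie can be "first-party" in one context and "third-party" in another. The classification is made by the browser, based on whether the domain setting the cookie matches the site the user actually navigated to.

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();

// If the user navigates to this tracker's own domain directly, the
// Set-Cookie below is a first-party cookie, which Safari's defaults allow.
//
// If the very same endpoint is hit because another site embedded
// <img src="https://tracker.example/pixel.gif"> (or an iframe or script),
// the browser classifies the identical cookie as third-party - and
// Safari's default setting blocked exactly that.
app.get("/pixel.gif", (req, res) => {
  // Reuse the visitor's identifier if one came back, otherwise mint one.
  const existing = req.headers.cookie?.match(/uid=([^;]+)/)?.[1];
  const uid = existing ?? randomUUID();

  res.cookie("uid", uid, { maxAge: 365 * 24 * 3600 * 1000 });

  // The "tracking" part: log the visit against the identifier, including
  // which page embedded the pixel.
  console.log(`uid=${uid} referer=${req.headers.referer ?? "n/a"}`);

  // A real pixel would return a 1x1 GIF; an empty response works for a sketch.
  res.status(204).end();
});

app.listen(3000);
```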


But it turns out there are a few ways for companies to get around these limitations. The one that Mayer's paper focused on involved inserting code to place tracking cookies within Safari. He found four companies doing this: Google, Vibrant Media, Media Innovation Group and PointRoll.


But in an interview, Mayer said Google had unilaterally decided that privacy permissions for its products superseded the privacy restrictions those users had enacted - implicitly or explicitly - by choosing to use Safari. "The user is giving up some privacy in exchange for lining Google's pockets," he said.


Meanwhile, an even bigger problem occurred. Once Google tweaked the way Safari functions, other Google advertising cookies could be installed on the devices. "We didn't anticipate that this would happen, and we have now started removing these advertising cookies from Safari browsers," Whetstone said. Why this problem was spotted by a Stanford graduate student and not by a major corporation that's been under continual privacy scrutiny is a fair question that Google has yet to answer.

"I think that's a pretty big 'oops,' and it raises pretty big questions," Mayer said. Chris Hoofnagle, a digital privacy expert at UC Berkeley's law school, said there's a corporate tone-deafness within the engineering-centric culture of Google that leads to these sorts of mistakes. "To the engineer, cookie blocking appears to be a technical error that they should try to solve," he said. "It's very difficult for them to accept the frame that some people do not want this tracking."


Let me say, I don't know whether Google is telling the truth or not - my instincts say they aren't, because of their track record on this issue. Regardless, this latest privacy breach, and violation of consumer desires and expectations, proves yet again that our regulations and rights have not caught up with technological advancements in the digital realm.

As I have often written here, once again, an issue like this raises some particularly important questions: What kind of control should we have over our own data? And, what kind of tools should be available for us to protect it? What about ownership of our data? Should we be compensated for the billions of dollars being made by corporations from their tracking of us? And of course, what of the government's access to this new world of data storage?

The argument from privacy advocates has largely been that this massive and stealth data collection apparatus threatens user privacy and regulators should compel (not hope that) companies to obtain express consent from consumers before serving up "behavioral" ads based on their online history.

More to the point is the simple, unavoidable fact that consumers should have MORE control, not less, over what information of ours is used, shared, and profited off.

Again, for first-time or rare readers of this blog, I would also point to the consistent dichotomy between the now-proven HIGH LEVEL of concern about privacy on the internet among users and the fact that they tend to do very little to actually protect it (which of course is related to how hard and complicated it can be to do so). Which in my mind makes easy-to-use, clear options to protect privacy so paramount. Once people are given such a choice, not only will more people choose to "not be tracked," I think more people will become more AWARE of just how all-pervasive such monitoring of nearly everything we do has become.

So, let's get to some OBVIOUS solutions to this growing online tracking problem, magnified by Google's latest violation of consumer trust. The fact that we CLEARLY have next to no privacy standards related to these technological innovations and trends is disturbing, and more than enough of a reason for legislation like California's SB 761 (Do Not Track).

The Do Not Track flag is a rather simple concept that's already been built into Firefox and IE9. If users choose to turn on the option, every time they visit a web page the browser will send a message to the site, saying “do not track.”

SB 761 (Lowenthal) would offer consumers such a mechanism, something the bill's sponsor describes as "one of the most powerful tools available to protect consumers' privacy." The mechanism will allow anyone online to send Web sites the message that they do not want their online activity monitored.
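
Reading that signal on the receiving end is trivially easy, which is part of why the real fight is over what honoring it should require. Here's a hedged sketch of what a cooperating site might do (again using Express purely as an illustration; neither SB 761 nor the W3C drafts mandate any particular implementation):

```typescript
import express from "express";

const app = express();

// Browsers with the preference enabled send "DNT: 1" on every request.
app.use((req, res, next) => {
  res.locals.trackingAllowed = req.header("DNT") !== "1";
  next();
});

app.get("/", (req, res) => {
  if (res.locals.trackingAllowed) {
    // e.g. set an analytics cookie, record a profile event, etc.
  }
  res.send(res.locals.trackingAllowed
    ? "No DNT signal received."
    : "DNT received: this response sets no tracking cookies.");
});

app.listen(3000);
```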

Obviously, this legislation happens to be something I'll be working on here in Sacramento this year - but a federal version would be most useful.

To be sure, there is no magic bullet when it comes to protecting privacy from digital tracking. Another solution, advocated by such privacy experts as Chris Hoofnagle, is data-retention limits. As he recently stated in an interview in the San Francisco Chronicle, "We know from behavioral economics that most people won't turn on do-not-track features, so if you're serious about protecting privacy, if you think there's a value here, you should protect it by default. It would require no user intervention. You would impair the ability of companies and law enforcement to create long-term profiles about people."

Similarly, as Hoofnagle points out, there are limitations to the "opt out" option alone, stating, "Under self-regulatory programs, they allowed people to opt out of targeted advertising if they wanted. But people figured out that what that meant is these companies could still track you, they just couldn't show you online behavioral advertising. They could still choose to target you in another channel (like direct-mail marketing or telemarketing.) And if you look at all the tracking they do, they can identify you in a fairly trivial way. Our study also found over 600 third-party hosts of cookies, most of which are not members of any self-regulatory organization (and thus aren't bound to the rules of opt-out programs). They're not even necessarily advertisers, they could be governments. We really don't know who they are."
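
Hoofnagle's distinction is easy to miss, so here's a schematic sketch of the self-regulatory "opt out" he describes - invented data shapes, not any real ad network's code. Note what the opt-out cookie changes and what it doesn't.

```typescript
// Invented data shapes, not any real ad network's code.
interface AdRequest {
  userId: string;
  url: string;
  cookies: Record<string, string>;
}

const profiles = new Map<string, string[]>();

function handleAdRequest(req: AdRequest): string {
  // The tracking happens regardless of the opt-out cookie: the visit is
  // still logged to the user's profile.
  const history = profiles.get(req.userId) ?? [];
  history.push(req.url);
  profiles.set(req.userId, history);

  // The opt-out cookie only changes which ad comes back.
  if (req.cookies["optout"] === "1") {
    return "generic ad";  // no behaviorally targeted ad is shown...
  }                       // ...but the profile above keeps growing either way
  return `ad targeted using ${history.length} logged page views`;
}
```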

The need for such consumer friendly and empowering solutions to this exploding data mining industry and tracking capabilities is clear because we KNOW marketers will stop at NOTHING to ensure they can monitor online behavior...so we can be better profiled by the government and marketed to by advertisers.

As a recent Berkeley study found, "Seven of the top 100 sites appeared to be using what's known as HTML5 local storage to back up standard cookies, and two were found to be respawning cookies....Third-party advertisers on the site were still employing the flash cookies, along with another type that takes advantage of the browser's cache, where online data is stored on the computer so it can be delivered faster. This ETag tracking allows advertisers to monitor users, even when they block all cookies and use a private browsing mode."
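
Here's a hedged sketch of the "respawning" pattern the study describes, as it might look in browser-side code (a simplification of the idea, not any particular site's implementation): the identifier is copied into more than one storage location, and any surviving copy is used to re-create the ones the user deleted.

```typescript
// Sketch of cookie "respawning" via HTML5 localStorage - the same idea
// behind Flash-cookie and "evercookie"-style techniques.
function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function getPersistentId(): string {
  const fromCookie = readCookie("uid");
  const fromStorage = window.localStorage.getItem("uid");

  // Any surviving copy wins; otherwise mint a new identifier.
  const id = fromCookie ?? fromStorage ?? crypto.randomUUID();

  // "Respawn": rewrite the identifier everywhere, undoing the deletion of
  // either copy. (ETag tracking is the server-side analogue - the
  // identifier rides along in the browser's cache-validation headers.)
  document.cookie = `uid=${encodeURIComponent(id)}; max-age=${365 * 24 * 3600}; path=/`;
  window.localStorage.setItem("uid", id);
  return id;
}
```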

In other words, it's time consumers regain control of our privacy and our personal information - through law, not through hope and polite requests to industries that don't care about you or your privacy, only their bottom line.

Tuesday, February 14, 2012

Domestic Spy Drones Approved by Congress

As if I planned it myself, just the day after I wrote a major blog (see the last one) about 7 privacy threats that the Constitution can't protect you from, Congress goes ahead and APPROVES two of them for widespread use. The two I speak of, as detailed by Alternet's Tana Ganeva, have to do with domestic spy drones. As I wrote at the time, these drones apparently do more than just kill innocent women and children around the world; they are, in fact, perfect domestic spying devices too.

As Ganeva also detailed, "An ACLU report from December says that local law enforcement officials are pushing for domestic use of the new technology, as are drone manufacturers. As Glenn Greenwald points out, drone makers "continuously emphasize to investors and others that a major source of business growth for their drone products will be domestic, non-military use."

Right now drones range in size from giant planes to hummingbird-sized, the ACLU report says, with the technology improving all the time. Some can be operated by only one officer, and others by no one at all. The report points to all the sophisticated surveillance technology that can take flight on a drone, including night vision, video analytics ("smart" surveillance that can track activities, and with improvements in biometrics, specific people), massive zoom, and the creepy see-through imaging, currently in development.

Similarly, there are also what are called "Super drones" that actually know who you are, because, as reported by Wired magazine, the military has given out research grants to several companies to spruce up these drones with technology that lets them identify and track people on the move, or "tagging, tracking, and locating" (TTL).

After writing about these disturbing possibilities, I then read these 3 stories, "Congress OKs FAA Bill allowing drones in US, GPS air traffic control", "Bill authorizes Use of Unmanned Drones in US Airspace", and "Drones over US get OK by Congress"

Let's go to the Chicago Tribune's report on this...this clip was found about halfway into the article:

The FAA is also required under the bill to provide military, commercial and privately-owned drones with expanded access to U.S. airspace currently reserved for manned aircraft by Sept. 30, 2015. That means permitting unmanned drones controlled by remote operators on the ground to fly in the same airspace as airliners, cargo planes, business jets and private aircraft.

Currently, the FAA restricts drone use primarily to segregated blocks of military airspace, border patrols and about 300 public agencies and their private partners. Those public agencies are mainly restricted to flying small unmanned aircraft at low altitudes away from airports and urban centers.

Within nine months of the bill's passage, the FAA is required to submit a plan on how to safely provide drones with expanded access.


Interestingly, not much more was said or discussed about these new rules and rights in the article. So, let's go to the piece by the New American for more: 

Big Brother is set to adopt a new form of surveillance after a bill passed by Congress will require the Federal Aviation Administration (FAA) to open U.S. airspace to drone flights under a new four-year plan. The bill, which passed the House last week and received bipartisan approval in the Senate on Monday, will convert radar to an air traffic control system based on GPS technology, shifting the country to an age where satellites are central to air traffic control and unmanned drones glide freely throughout U.S. airspace.

By using GPS technology, congressional leaders argued, planes will land and take off more efficiently, as pilots will be able to pinpoint the locations of ground obstacles and nearby aircraft. The modernization procedures play into the FAA’s ambitious plan to achieve 50-percent growth in air traffic over the next 10 years. This legislation is "the best news that the airline industry ever had," applauded Sen. Jay Rockefeller (D-W.Va.). "It will take us into a new era."


...

Furthermore, privacy advocates worry that the bill will open the door to widespread use of drones for surveillance by law enforcement and, eventually, by the private sector. Some analysts predict that the commercial drone market in the U.S. could be worth hundreds of millions of dollars once the FAA authorizes their use, and that 30,000 drones could be flying domestically by 2020. "There are serious policy questions on the horizon about privacy and surveillance, by both government agencies and commercial entities," said Steven Aftergood, director of the Project on Government Secrecy at the Federation of American Scientists.

The Electronic Frontier Foundation, a digital rights advocacy and legal group, also is "concerned about the implications for surveillance by government agencies," affirmed attorney Jennifer Lynch, adding that there is "a huge push by lawmakers and the defense sector to expand the use of drones" in U.S. airspace.

"Congress — and to the extent possible, the FAA — need to impose some rules to protect Americans’ privacy from the inevitable invasions that this technology will otherwise lead to," wrote American Civil Liberties Union policy analyst Jay Stanley. "We don’t want to wonder, every time we step out our front door, whether some eye in the sky is watching our every move."


Now that I have your attention, let's get to the Washington Times (an admitted rag of a paper...but that doesn't mean they don't have anything of use to report):

Look! Up in the sky! Is it a bird? Is it a plane? It's ... a drone, and it's watching you. That's what privacy advocates fear from a bill Congress passed this week to make it easier for the government to fly unmanned spy planes in U.S. airspace.

....

Privacy advocates say the measure will lead to widespread use of drones for electronic surveillance by police agencies across the country and eventually by private companies as well.

"There are serious policy questions on the horizon about privacy and surveillance, by both government agencies and commercial entities," said Steven Aftergood, who heads the Project on Government Secrecy at the Federation of American Scientists.

....


The Electronic Frontier Foundation is suing the FAA to obtain records of the certifications. "We need a list so we can ask [each agency], 'What are your policies on drone use? How do you protect privacy? How do you ensure compliance with the Fourth Amendment?' " Ms. Lynch said.

"Currently, the only barrier to the routine use of drones for persistent surveillance are the procedural requirements imposed by the FAA for the issuance of certificates," said Amie Stepanovich, national security counsel for the Electronic Privacy Information Center, a research center in Washington.


Let's remember what I posted last week on this topic - before I knew Congress was about to legitimize all of it. As Noah Shachtman wrote: Perhaps the idea of spy drones already makes you nervous. Maybe you’re uncomfortable with the notion of an unblinking, robotic eye in the sky that can watch your every move. If so, you may want to click away now. Because if the Army has its way, drones won’t just be able to look at what you do. They’ll be able to recognize your face — and track you, based on how you look. If the military machines assemble enough information, they might just be able to peer into your heart.

One company claims it can equip drones with facial recognition technology that lets them build a 3-D model of a face based on a 2-D image, which would then allow the drone to ID someone, even in a crowd. 


They also say that if they can get a close enough look, they can tell twins apart and reveal not only individuals' identity but their social networks.

The Army also wants to identify potentially hostile behavior and intent, in order to uncover clandestine foes. Charles River Analytics is using its Army cash to build a so-called “Adversary Behavior Acquisition, Collection, Understanding, and Summarization (ABACUS)” tool. The system would integrate data from informants’ tips, drone footage, and captured phone calls. Then it would apply “a human behavior modeling and simulation engine” that would spit out “intent-based threat assessments of individuals and groups.” In other words: This software could potentially find out which people are most likely to harbor ill will toward the U.S. military or its objectives. Feeling nervous yet?


We're getting into truly Orwellian levels of surveillance that make one ask, "just what in the hell are we so afraid of that we need to be monitored at all times?" We know, study after study indicates, that we ARE NOT under a dangerous threat from terrorists, either from abroad or from within. We know that the chances of being killed by a terrorist are a fraction of the chance that you'll be hit by lightning.

Yet, here we are, rationalizing and legitimizing MASSIVE surveillance apparatuses that leave our privacy, and the Constitution, in tatters. What is the bigger threat here? A government, and in fact a PRIVATE drone industry, that can watch us anywhere, at all times, and even facially recognize us, for who knows what purposes (i.e. stifling dissent)...or can we, as brave Americans, simply accept the TINY TINY risk that comes with living in a world in which we're not constantly watched? I hate to repeat myself so much on this blog, but I also know how many readers are first-time readers, so let me break this privacy-versus-security paradox down again.

In the final analysis, if we include in our definition of "safe" the concept of being "safe" from government intrusiveness and corporate profiteering off fear peddling, I would argue these machines make us less secure, not more. So let's scrap the meme that we should live in fear and that our constitutional rights must be sacrificed to address a threat that is a fraction of that posed by lightning, salmonella, and the health insurance industry.

The trend line is all too clear. More concerning than any single threat posed by any single technology – including drone surveillance – is this larger pattern indicating that privacy as both a right and an idea is under siege. The consequences of such a loss would be profound.

This false dichotomy between security and privacy must be directly confronted. As security and privacy expert Bruce Schneier once wrote, "If you set up the false dichotomy, of course people will choose security over privacy -- especially if you scare them first. But it's still a false dichotomy. There is no security without privacy. And liberty requires both security and privacy. The famous quote attributed to Benjamin Franklin reads: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." It's also true that those who would give up privacy for security are likely to end up with neither.”

And let me sum this all up, once again, as I often do here.

Whether it's the knowledge that everything we do on the internet is followed and stored, that we can be wiretapped for no reason and without a warrant or probable cause, that smart grid systems monitor our daily in-home habits and actions, that our emails can be intercepted, that our naked bodies must be viewed at airports and stored, that our book purchases can be accessed (particularly if Google gets its way and everything goes electronic), that street corner cameras are watching our every move (and perhaps drones too), and that RFID tags and GPS technology allow for the tracking of clothes, cars, and phones (and the list goes on)...what is certain is that privacy itself is on life support in this country...and without privacy there is no freedom. I also fear how such a surveillance society stifles dissent and discourages grassroots political/social activism that challenges government and corporate power...something that we desperately need more of in this country, not less.

But perhaps the GREAT Jim Hightower frames this attack on privacy the best when he writes, "Look, up in the sky! Neither a bird nor Superman, the next must-have toy for assorted police agencies is the unmanned aerial vehicle, better known as drones. Yes, the same miniaturized aircraft that lets the military wage war with a remote-controlled, error-prone death machine is headed to your sky, if the authorities have their way. Already, Homeland Security officials have deployed one to a Texas sheriff's office to demonstrate its crime-fighting efficacy, and federal aviation officials are presently proposing new airspace rules to help eager departments throughout the country get their drones.

But airspace problems are nothing compared to the as-yet-unaddressed Fourth Amendment problems that come with putting cheap, flying-surveillance cameras in the air. As usual, this techno-whiz gadget is being rationalized as nothing more than an enhanced eye on crime. But the drone doesn't just monitor a particular person or criminal activity, it can continuously spy on an entire city, with no warrant to restrict its inevitable invasion of innocent people's privacy. Drones will collect video images of identifiable people. Who will see that information? How will it be used? Will it be retained? By its nature, this is an invasive, all-encompassing spy eye that will tempt authorities to go on fishing expeditions. The biggest question is the one that is not even being asked: Who will watch the watchers?"



We would do well - sooner rather than later - to recognize the inherent and fundamental value that privacy provides ANY claimed democracy. Without one there cannot be the other.

Tuesday, February 7, 2012

Privacy Threats The Constitution Can't Protect You From

I just read yet another fascinating and disturbing article by Alternet's Tana Ganeva - someone I've sourced on this blog before. The article in question, which certainly connects to many of the issues I've written on here over the years, is entitled "7 Privacy Threats the Constitution Can't Protect You Against".

Now, let's go through each, and I'll mix in some of what I have written on these topics in the past (and others I've cited), along with what Tana does in her article.

Interestingly, she begins with the Supreme Court case recently decided regarding GPS tracking of suspects without a warrant - an issue I've covered here in detail for over a year now.  She and I see this case, and the very limited (though correct) decision made by the court in a similar fashion. As I wrote just a couple weeks ago in response to the decision, "The fourth amendment isn’t completely dead after all! While this fundamental right to privacy is admittedly in tatters, the Supreme Court ruled yesterday that police must have a warrant in order to track someone using a GPS device....Unfortunately, the government will likely continue to insist that tracking the location of cell phones is unaffected by this ruling.

Certainly, the standout Justice was Sonia Sotomayor, who went much further than her colleagues on the issue of privacy in the digital age - even making a case for revision of the “third-party” doctrine (i.e., the idea that we lose Fourth Amendment protection when we disclose certain information to third parties). She wrote, “More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.”

As you can see, my concern immediately went to the question of law enforcement tracking things like cell phones - which wasn't addressed in the decision (except implicitly by Sotomayor). This matters because in 2009 Sprint received 8 million law enforcement requests for GPS location data in just one year. This is what we already KNOW, let alone what might be the full scope of the problem.

Tana Ganeva goes deeper, as the cell phone is only one way we can be tracked now, writing:

The Jones case itself presents an outdated problem, because police don't really have to bother with the clumsy task of sneaking a device onto a car; at this point, private companies have shoehorned location trackers in most "smart" gadgets. Justice Alito pointed out that the more than 332 million phones and wireless devices in use in the US contain technology that transmits the user's location. Many cars feature GPS as well, thanks to OnStar navigation. 


Location is just the start. There has probably not been a single week since 2005 without a story about Facebook, or Google, or Verizon, or AT&T terrifying consumers and privacy advocates with some new way to collect too much information and then share it with other companies or authorities. The problem is that the law does not adequately address private information that has been shared with third parties, like credit card companies or Google, Facebook and the telecoms, Tien says.

As Sotomayor put it, "I for one doubt that people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, or month, or year. " 

Now let's get to Tana's second "threat", which she calls "Cameras everywhere: License plate readers, movement tracking on cameras." 

This too is something I have tackled here on this blog, writing about the ever expanding reach of video surveillance cameras. Certainly, polls are also not on my side, as large majorities of Americans seem generally fine with having every movement of their existence on tape, and watched by someone. Of course, we know that cameras DON'T in fact reduce crime and we also know that governments and law enforcement DO abuse our civil liberties when given such authority to monitor us. Those are two BIG strikes in my mind.

I'm still not convinced, however, that this general support for such technological surveillance is a done deal, or that the argument in favor of FEWER cameras in FEWER locations is a lost one. I believe this for a couple of reasons. One, most Americans have no concept of just how often they are being watched or, worse, for what purposes. Two, few Americans have any idea of the level of abuses such "watchers" are capable of...and if the Bush Administration has taught us anything, it's that we can't trust government when they are given more power than they know what to do with. My guess is we are just scratching the surface, on issues ranging from wiretapping to surveillance to monitoring, and when that surface is broken, public opinion might just change on this topic.

What people may also not fully comprehend is that advanced monitoring systems such as the one at the Statue of Liberty are proliferating around the country. State-of-the-art surveillance is increasingly being used in everyday settings: by local police and businesses, in banks, schools and stores. There are an estimated 30 million surveillance cameras now deployed in the United States, shooting 4 billion hours of footage a week. Americans are being watched, all of us, almost everywhere.
 
Now let's get to what Tana has to say on this:

Thanks in part to a decade of Homeland Security grants, America's cities are teeming with cameras -- they're on subways, on buses, on store fronts, in restaurants, in apartment complexes, and in schools.  In New York, the NYCLU found a five-fold increase in the number of security cameras in one area of New York between 1998 and 2005, and that was before the Bloomberg administration -- inspired by London, most heavily surveilled city in the world -- pledged to install 3,000 cameras in lower Manhattan as part of the Lower Manhattan Security Initiative (this plan was expanded to midtown Manhattan as well). The cameras, which stream footage to a centralized location, are equipped with video analytics that can alert police to "suspicious" activity like loitering. The NYPD, and municipalities all over the country and world also make generous use of license plate readers (LPR) that can track car movement. 
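For readers wondering what "video analytics" actually means in practice, here is a rough sketch of how a loitering alert could work in principle. To be clear, this is my own simplified illustration - the watch zone, the time threshold, and the tracker feeding it coordinates are all made up for the example, not details of the NYPD's system.

```python
# Illustrative sketch only: how a "loitering" alert might work in principle.
# Assumes some upstream tracker already supplies (track_id, x, y) per frame;
# the zone, threshold, and frame rate are invented values.

from collections import defaultdict

LOITER_SECONDS = 120          # hypothetical alert threshold
ZONE = (100, 100, 400, 400)   # hypothetical watch zone: x1, y1, x2, y2
FPS = 15                      # assumed camera frame rate

def in_zone(x, y, zone=ZONE):
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

frames_in_zone = defaultdict(int)  # track_id -> consecutive frames inside zone

def update(track_id, x, y):
    """Call once per tracked person per frame; returns True when an alert fires."""
    if in_zone(x, y):
        frames_in_zone[track_id] += 1
    else:
        frames_in_zone[track_id] = 0
    return frames_in_zone[track_id] >= LOITER_SECONDS * FPS
```

The point is how little "intelligence" it takes to flag perfectly ordinary behavior - standing in one place too long - as "suspicious."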

Tana's third example is biometrics - another topic I've covered here in depth. A few months back I posted a pretty extensive piece on facial recognition technology and the threat it poses to individual privacy. As I've done in the past, because I know not everyone can read every post, I'll repeat a few of my thoughts here today. For some backdrop on biometrics, you can check out a past post I did about another article, also by Tana, entitled "5 Unexpected Places You Can Be Tracked With Facial Recognition Technology." As I wrote then, this issue is of particular interest to me due to a recent California fight that we (the Consumer Federation of California) were deeply involved in - whether biometric identifiers should be used by the DMV (we were able, with a host of other groups, to stop them).

As for the larger concern over facial recognition technology, groups from the Privacy Rights Clearinghouse (PRC) to the ACLU to the Electronic Frontier Foundation to EPIC have all been very active in making the case that there is a very real threat to privacy at stake in determining just how, and when, this technology can be used.

Again, going back to a prior post, I wrote: "First, let me refresh everyone on the concept of biometric identifiers - like fingerprints and facial or iris scans. These essentially match an individual’s personal characteristics against an image or database of images. Initially, the system captures a fingerprint, picture, or some other personal characteristic, and transforms it into a small computer file (often called a template). The next time someone interacts with the system, it creates another computer file and compares it against the stored template to see whether the two match.
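To make that "template" idea a little more concrete, here is a minimal sketch of the matching step, assuming the system has already boiled each fingerprint or face down to a short list of numbers. The vectors and the threshold below are purely illustrative, not taken from any real system.

```python
# Minimal illustration of biometric template matching: each capture is assumed
# to have been reduced to a numeric feature vector ("template"); a new capture
# matches if it is close enough to the enrolled one. The cutoff is made up.

import math

MATCH_THRESHOLD = 0.6  # hypothetical distance cutoff

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(enrolled_template, new_capture, threshold=MATCH_THRESHOLD):
    return distance(enrolled_template, new_capture) <= threshold

# Example: enrolled template vs. a fresh scan of (supposedly) the same person.
enrolled = [0.12, 0.80, 0.33, 0.55]
fresh    = [0.10, 0.78, 0.35, 0.57]
print(matches(enrolled, fresh))  # True if within the cutoff
```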

Now, here's Tana on this privacy creep:
After 9/11 many cities and airports rushed to boost their camera surveillance with facial recognition software. The tech proved disappointing, and after testing that hit a paltry 60 percent accuracy rate in one case (that's pretty bad if you're trying to figure out identity), many programs were abandoned. In the years since then, both private companies and university research labs funded with government grants have made vast improvements in facial recognition and iris scans, like 3-D face capture and "skinprint" technology (mapping of facial skin patterns). Iris scans can allegedly tell identical twins apart.

Many private companies shill these products directly to local law enforcement agencies, a business strategy that police tend to be pretty enthusiastic about. One such success story is the MORIS device, a gadget attached to an iPhone that can run face recognition software, take digital fingerprints and grab an iris scan at a traffic stop. Starting last fall, the MORIS device has been in use in police departments all over the country. 
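That 60 percent figure deserves a quick back-of-the-envelope calculation, because it shows why even a seemingly decent matcher becomes a problem when pointed at a crowd. The crowd size, watchlist size, and false alarm rate below are my own illustrative numbers, not figures from Tana's article.

```python
# Back-of-the-envelope: why a face matcher floods police with false hits when
# scanning crowds for a handful of suspects. All numbers are illustrative.

crowd_size  = 100_000   # faces scanned in a day (hypothetical)
watchlist   = 10        # actual persons of interest in that crowd (hypothetical)
hit_rate    = 0.60      # chance a real match is flagged (the ~60% figure)
false_alarm = 0.01      # chance an innocent face is wrongly flagged (assumed)

true_hits  = watchlist * hit_rate
false_hits = (crowd_size - watchlist) * false_alarm

print(f"Expected true hits:  {true_hits:.0f}")    # about 6
print(f"Expected false hits: {false_hits:.0f}")   # about 1,000
```

Roughly half a dozen genuine matches buried in about a thousand false alarms - and every one of those false alarms is an innocent person flagged by a machine.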

Tana's next example is that of ever-expanding government databases and the incredible amounts of private data they are accumulating on us. In this instance, I'll go straight to the article. She writes:

Privacy advocates point out that novel types of biometric technology like facial recognition and iris scans can be an unreliable form of ID in the field, but that has not discouraged government agencies from embarking on grand plans to hugely expand their biometric databases. The FBI's billion-dollar "Next Generation Identification" system (NGI) will house iris scans, palm prints, measures of voice and gait, records of tattoos, and scars and photos searchable with facial recognition technology when it's complete in 2014. The bulk of this information is expected to come from local law enforcement. 

There are a number of reasons why such technological identifiers should concern us. So let's be really clear: creating a database with millions of facial scans and thumbprints raises a host of surveillance, tracking and security questions - never mind the cost. And as you might expect, such identifiers are being utilized by entities ranging from Facebook to the FBI. In fact, the ACLU of California is currently asking for information about law enforcement's use of information gathered from facial recognition technology (as well as from social networking sites, book providers, GPS tracking devices, automatic license plate readers, and public video surveillance cameras)."

Next up on Tana's list of 7 privacy threats is a new one for me: "FAST (Future Attribute Screening Technology)." She writes, "Then there's the tech that's supposed to peer inside your head. In 2008, the Department of Homeland Security lab tested a program called Future Attribute Screening Technology (FAST), designed to thwart criminal activity by predicting "mal-intent." Unsavory plans are supposed to reveal themselves through physiological tells like heart rate, pheromones, electrodermal activity, and respiratory measurements, according to a 2008 privacy impact assessment.

The 2008 privacy assessment, though, only addressed the initial laboratory testing of FAST's prophesying sensors on volunteers. According to a report in the journal Nature, sometime last year DHS also tested the technology in a large, undisclosed area in the northeastern US."
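DHS has not published how FAST actually scores anyone, so take the following only as a toy illustration of the general kind of approach such a system might use: compare a subject's physiological readings against a baseline and flag large deviations. Every signal, number, and threshold here is invented for the example.

```python
# Toy illustration only: one way a screening system *might* combine
# physiological signals into a "suspicion" score. This is NOT how FAST
# works (DHS has not published its method); every value here is invented.

BASELINES = {          # hypothetical population averages: (mean, std dev)
    "heart_rate":   (72.0, 10.0),   # beats per minute
    "respiration":  (14.0, 3.0),    # breaths per minute
    "skin_conduct": (5.0, 1.5),     # microsiemens (electrodermal activity)
}

ALERT_SCORE = 2.5  # hypothetical cutoff, in average standard deviations

def suspicion_score(readings):
    """Average absolute z-score of the subject's readings vs. the baselines."""
    zs = []
    for signal, value in readings.items():
        mean, std = BASELINES[signal]
        zs.append(abs(value - mean) / std)
    return sum(zs) / len(zs)

subject = {"heart_rate": 110.0, "respiration": 24.0, "skin_conduct": 9.5}
score = suspicion_score(subject)
print(round(score, 2), "-> flagged" if score >= ALERT_SCORE else "-> cleared")
```

The obvious problem, of course, is that a nervous traveler, a sick traveler, and a would-be attacker can all look exactly the same to a heart rate monitor.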

Tana's 6th threat is none other than those mechanical war criminals called Drones! Apparently they do more than just bomb innocent women and children around the world; they make perfect domestic spying devices too. She writes, "An ACLU report from December says that local law enforcement officials are pushing for domestic use of the new technology, as are drone manufacturers. As Glenn Greenwald points out, drone makers "continuously emphasize to investors and others that a major source of business growth for their drone products will be domestic, non-military use."

Right now drones range in size from giant planes to hummingbird-sized, the ACLU report says, with the technology improving all the time. Some can be operated by only one officer, and others by no one at all. The report points to all the sophisticated surveillance technology that can take flight on a drone, including night vision, video analytics ("smart" surveillance that can track activities, and with improvements in biometrics, specific people), massive zoom, and the creepy see-through imaging, currently in development. 

And finally, Tana's 7th privacy threat is what she terms "Super drones that know who you are!" She goes on to explain, writing:

In September, Wired reported that the military has given out research grants to several companies to spruce up their drones with technology that lets them identify and track people on the move, or "tagging, tracking, and locating" (TTL). Noah Shachtman writes:

Perhaps the idea of spy drones already makes you nervous. Maybe you’re uncomfortable with the notion of an unblinking, robotic eye in the sky that can watch your every move. If so, you may want to click away now. Because if the Army has its way, drones won’t just be able to look at what you do. They’ll be able to recognize your face — and track you, based on how you look. If the military machines assemble enough information, they might just be able to peer into your heart.

One company claims it can equip drones with facial recognition technology that lets them build a 3-D model of a face based on a 2-D image, which would then allow the drone to ID someone, even in a crowd. They also say that if they can get a close enough look, they can tell twins apart and reveal not only individuals' identity but their social networks, reports Wired. That's not all. Shachtman continues: 

The Army also wants to identify potentially hostile behavior and intent, in order to uncover clandestine foes. Charles River Analytics is using its Army cash to build a so-called “Adversary Behavior Acquisition, Collection, Understanding, and Summarization (ABACUS)” tool. The system would integrate data from informants’ tips, drone footage, and captured phone calls. Then it would apply “a human behavior modeling and simulation engine” that would spit out “intent-based threat assessments of individuals and groups.” In other words: This software could potentially find out which people are most likely to harbor ill will toward the U.S. military or its objectives. Feeling nervous yet?

To answer that final question: yes...I do feel nervous. I've written a lot on this blog about what it means to live in a society without ANY privacy. As I have said, such a society - one we are rapidly approaching - has ramifications that go far deeper than simply "being watched" or feeling uneasy. What we are talking about is freedom itself...and the way such an all-seeing surveillance state stifles dissent and disempowers citizens.

As I have written here before, "Whether it's the knowledge that everything we do on the internet is followed and stored, that we can be wiretapped for no reason and without a warrant or probable cause, that smart grid systems monitor our daily in-home habits and actions, that our emails can be intercepted, that our naked bodies must be viewed at airports and stored, that our book purchases can be accessed (particularly if Google gets its way and everything goes electronic), that street corner cameras are watching our every move (and perhaps drones too), and that RFID tags and GPS technology allow for the tracking of clothes, cars, and phones (and the list goes on)...what is certain is that privacy itself is on life support in this country...and without privacy there is no freedom. I also fear how such a surveillance society stifles dissent and discourages grassroots political/social activism that challenges government and corporate power...something that we desperately need more of in this country, not less."

As Bruce Schneier, a security and privacy expert once wrote, "...lack of privacy shifts power from people to businesses or governments that control their information. If you give an individual privacy, he gets more power…laws protecting digital data that is routinely gathered about people are needed. The only lever that works is the legal lever...Privacy is a basic human need…The real choice then is liberty versus control.”

We would do well - sooner rather than later - to recognize the inherent and fundamental value that privacy provides to ANY claimed democracy. Without one there cannot be the other...