Since this has become such a big issue in California for privacy advocates in recent weeks (see my past posts on the subject), I think I'll continue to post expert opinions on the pros and cons of biometrics. This will be particularly important to understand, given that the Legislature will be debating in the coming months whether to include facial and thumb print scans in all California licenses.
There's no real consensus yet among privacy advocates as to whether ANY additional biometrics in licenses are acceptable, or whether, if implemented with complete transparency and ironclad privacy protections, some middle ground could be agreed upon. My sense is that facial scans are simply unwarranted and intrusive. Period.
In fact, just last week a group of Vietnamese researchers showed that facial scans might not be such a good idea...cracking the system using multiple means. The simplest way was to use a picture of the person to spoof the webcam into thinking it was the user. Given the ready availability of images on sites like MySpace and Facebook, this seems to be an easy route to access.
The researchers also showed that they could use a brute force attack, generating multiple random fake faces to eventually gain access. Professor Duc states in his paper on the hack, "The mechanisms used by those three vendors haven't met the security requirements needed by an authentication system, and they cannot wholly protect their users from being tampered."
With all that said, here's what Toby Stevens from Computer Weekly had to say:
I'm personally not too concerned about the application of biometric technologies in appropriate situations. What worries me are the processes and broader IT systems that depend on those technologies. Biometrics occasionally throw up false acceptances or false rejections. The problem is that the systems and officials that depend on those biometrics, and the databases of personal information to which they are linked, place too much dependence on them and then make ridiculous decisions as a result. The attitude of "there's a biometric involved so it must be correct" is very dangerous indeed - ask the people who have suffered wrongful arrest, rendition and torture as a result of stupid decisions made on the back of biometric system errors (more on this in a forthcoming blog article).
The problem is that all too often the organisations implementing biometric systems have failed to be transparent about the purpose or operation of the system, and this has reinforced mistrust of the technologies. School implementations are once again an example, since local authorities have often refused to discuss details of their fingerprinting approaches, or even to seek valid consent to that use of personal information, believing it to be covered by statutory processing permissions.
Should we gather biometric templates or biometric images? The most complex and expensive part of a biometric scheme is enrolment of the data subjects into the system. Algorithms and technologies are developing quickly, and to protect the investment it is tempting to capture images (a high-quality scan of the biometric, eg a digital photo or high-quality voice recording) so that templates (mathematical products derived from that image, which can be used to confirm a biometric but cannot be used to recover the original image) can be regenerated when needed. However, 9 times out of 10 organisations go for the image option since they believe that this will future-proof their investment. Templates have fewer privacy implications than images, since a stolen image can (in theory) be used to assist in attacks on the user's identity, whilst the template is of far less use. Moreover, once a biometric image has been stolen and used for fraud, it can't be revoked - you can't change your fingerprints!
Not surprisingly, the answers to our key questions can be derived quickly, easily and with a minimum of cost. Every biometric application should have a Privacy Impact Assessment (PIA) as part of its business case, completed before any procurement or development commences. The PIA should consider whether biometric technologies are a proportionate and acceptable solution to the problem in hand; whether the application should seek to identify or authenticate the users; and if so, whether it is really necessary to capture an image at time of enrolment, or will a template alone deliver the necessary functions.
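Stevens' distinction between images and templates can be made concrete with a toy sketch. This is purely illustrative and not how any real vendor does it (actual systems use specialized feature extraction and fuzzy matching, not a plain hash): the point is that a template derived by a one-way function can confirm a match, but can't be reversed into the original image.

```python
import hashlib

def make_template(feature_vector, secret_salt):
    """Derive a one-way template from a biometric feature vector.

    Deliberately simplified: we quantize each feature (so nearby scans
    match) and hash the result, so the stored value cannot be turned
    back into the original scan.
    """
    quantized = ",".join(str(round(x, 1)) for x in feature_vector)
    return hashlib.sha256((secret_salt + quantized).encode()).hexdigest()

def verify(candidate_vector, stored_template, secret_salt):
    """Confirm a fresh scan against a stored template, never an image."""
    return make_template(candidate_vector, secret_salt) == stored_template
```

A stolen database of these templates reveals nothing about what anyone's fingerprint or face looks like, which is exactly why Stevens argues templates carry fewer privacy implications than stored images.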
In many ways it was precisely the lack of this kind of analysis, rigorous study and ironclad safeguards demonstrated by the DMV last month that led our privacy coalition to take such aggressive action. Surely this issue, and technology, can be dealt with in a more transparent and meticulous way in the coming months.
Thursday, February 26, 2009
Tuesday, February 24, 2009
As I discussed last Friday, the Federal Trade Commission finally released its initial - and long awaited - guidelines designed to ensure the privacy of people whose online information is gathered by marketers. The consensus among consumer and privacy rights groups generally was that they simply didn't go far enough and left everything up to voluntary compliance (with the threat of later enforcement).
Before I get to the choice of Leibowitz as the new chairman - a position all agree is critical for the future of privacy - here are a couple of brief passages from the Business Week article detailing the FTC proposal from last week:
Guidelines may be a step in the direction of protecting privacy, but consumer advocacy groups say the government needs to pass legislation that regulates behavioral targeting practices. The FTC "should have told Congress it's time to act and create legislative safeguards," says Jeffrey Chester, founder and executive director of the Washington-based Center for Digital Democracy. The CDD and other consumer groups also say the FTC doesn't provide sufficient guidance in areas such as the definition of sensitive data.
The FTC "encourages industry, consumer, and privacy advocates, and other stakeholders to develop more specific standards to address" sensitive data. But, says Pam Dixon, executive director of the World Privacy Forum, "the industry already has developed [definitions for sensitive data] but they are absolutely inadequate." Protecting health-related information is especially important as the government's proposed stimulus legislation earmarks funds for the management of online health records, Dixon says.
Now, it's up to the Administration of President Barack Obama to appoint a chairman to head the commission, one of the few agencies of its stature that still lacks a head.
Well, now we've got Obama's choice, and the responsibility falls on him to add teeth to these largely toothless guidelines. The apparent good news is Leibowitz has a solid record and is ready to regulate - and was unsatisfied with the "voluntary" approach to corporate compliance taken during the Bush years.
Liberal groups including the ACLU and U.S. PIRG last year called on the Obama administration to appoint a chairman who would take a more regulatory approach. More recently, many of those same groups criticized the FTC's view that self-regulation of online targeted advertising was sufficient, which Leibowitz also seemed to take issue with.
In November 2007, Leibowitz suggested that Internet companies should take an "opt in" approach to cookies instead of the current "opt out" approach, a requirement that would have roiled the industry. He also suggested the idea of a "Do Not Track" list for Web surfers.
"Leibowitz will help transform what has been a largely anemic regulatory watchdog during the Bush years into an agency that sees its first priority as consumer protection," said Jeff Chester, executive director of the Center for Digital Democracy, a liberal group that advocates for more regulation. "Public interest groups such as mine appreciate that Leibowitz has called for tougher online privacy safeguards, and that his door has always been open."
On the issue of Net neutrality, Leibowitz stood out from his colleagues in June 2007 when the FTC released a report stating no new laws were necessary. Leibowitz issued an opinion saying existing antitrust laws may not have been "adequate to the task" of Internet broadband regulation.
"Will carriers block, slow or interfere with applications?" Leibowitz asked at a public hearing held by the FTC in November 2006. "If so, will consumers be told about this before they sign up? In my mind, failure to disclose these procedures would be...unfair and deceptive."
All in all, this appears to be an excellent selection! I'm especially excited by his support for the concept of opt-in over opt-out when it comes to Internet companies and cookies, as well as his clear and unequivocal support for Net Neutrality. Keeping the Internet free and democratic is one of THE most important telecommunication issues facing our country.
Friday, February 20, 2009
It's WAY too late in the day - and week - to come across an article like this! I don't know if it's just me, but this goes way over the line in my mind. So, just to be safe, you may want to sit down and strap yourself in for this one.
Yesterday, a group of Republican lawmakers called for "a sweeping new federal law that would require all Internet providers and operators of millions of Wi-Fi access points, even hotels, local coffee shops, and home users, to keep records about users for two years to aid police investigations."
I'm going to refrain from my usual privacy defending and Constitution hailing rants today. Let's just go to the article in CNET.
Declan McCullagh, CNET News' chief political correspondent reports:
The legislation...would impose unprecedented data retention requirements on a broad swath of Internet access providers and is certain to draw fire from businesses and privacy advocates.
Two bills have been introduced so far--S.436 in the Senate and H.R.1076 in the House. Each of the companion bills is titled "Internet Stopping Adults Facilitating the Exploitation of Today's Youth Act," or Internet Safety Act.
Each contains the same language: "A provider of an electronic communication service or remote computing service shall retain for a period of at least two years all records or other information pertaining to the identity of a user of a temporarily assigned network address the service assigns to that user."
Translated, the Internet Safety Act applies not just to AT&T, Comcast, Verizon, and so on--but also to the tens of millions of homes with Wi-Fi access points or wired routers that use the standard method of dynamically assigning temporary addresses. (That method is called Dynamic Host Configuration Protocol, or DHCP.)
The legal definition of electronic communication service is "any service which provides to users thereof the ability to send or receive wire or electronic communications." The U.S. Justice Department's position is that any service "that provides others with means of communicating electronically" qualifies.
That sweeps in not just public Wi-Fi access points, but password-protected ones too, and applies to individuals, small businesses, large corporations, libraries, schools, universities, and even government agencies. Voice over IP services may be covered too.
Under the Internet Safety Act, all of those would have to keep logs for at least two years. It "covers every employer that uses DHCP for its network," Gidari said. "It covers Aircell on airplanes--those little picocells will have to store a lot of data for those in-the-air Internet users."
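To make concrete what "keeping logs" would mean for a home or coffee-shop router, here's a minimal, hypothetical sketch of the kind of DHCP lease record an operator might have to retain. The bills specify no particular format; the field names and CSV layout here are my own assumption purely for illustration.

```python
import csv
import time

# Hypothetical retention schema - the bills mandate no specific format.
FIELDS = ["timestamp", "mac_address", "assigned_ip", "lease_seconds"]

def log_lease(path, mac, ip, lease_seconds):
    """Append one DHCP lease assignment to a retention log file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "timestamp": int(time.time()),
            "mac_address": mac,
            "assigned_ip": ip,
            "lease_seconds": lease_seconds,
        })
```

Every device that joins the network would generate one of these rows, and under the bills' language, two years' worth of them would have to be stored - by everyone from Comcast down to a home user with a wireless router.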
I'm a little startled by these bills, and feel the need to do some more research and snooping around before I comment too much. I'll be back with more on this issue...and certainly will follow the progress of these two proposals...they reek of Big Brother to me...
Click here to read more.
Thursday, February 19, 2009
I hate to say "I told you so", but in light of the battle we privacy advocates have waged over the past two weeks to stop the California DMV from implementing a massive biometrics program, I'm going to...so, I told you so. There, I said it :)
In all seriousness though, this is PRECISELY why we forced this issue into the mainstream and demanded it be debated by the California legislature, with public hearings, BEFORE every Californian's license is forced to include both facial and thumb print biometrics.
As if on cue, I found this story today in a number of publications about just how easy it was for hackers to make short work of "super-secure" facial biometrics. Gee, that's funny, all us privacy advocates were JUST being accused of being paranoid Luddites, and now we see there IS good reason to take these things slow.
Check some of my previous posts for the back story on our big biometrics fight here in California. You can also click here to check an article we wrote up on our website.
Daily Tech reports:
The problem with any hot technology in the security world is that the desire to raise a product above the competition seems to invariably lead to boastful claims. Such claims make the technology a high profile target for hackers, and with the bright minds in the field, it takes little time to take many supposedly "unbeatable" countermeasures down. Thus was the case with RFID, recently shown to be extremely insecure, and now it appears that at least some types of biometrics are headed down the same path.
The Vietnamese researchers showed that the tech might not be such a good idea, though, by using multiple means to crack it. The simplest way was to simply use a picture of the person to spoof the webcam into thinking it was the user. Given the ready availability of images on sites like MySpace and Facebook, this seems to be an easy route to access.
The researchers also showed that, lacking a picture to use as the easier route, they could use a brute force attack, generating multiple random fake faces to eventually gain access. Professor Duc states in his paper on the hack, "The mechanisms used by those three vendors haven't met the security requirements needed by an authentication system, and they cannot wholly protect their users from being tampered."
He continues, "There is no way to fix this vulnerability. ASUS, Lenovo, and Toshiba have to remove this function from all the models of their laptops ... [they] must give an advisory to users all over the world: Stop using this [biometric] function."
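The weakness the researchers exploited is easy to illustrate with a toy sketch. This is a hypothetical matcher of my own construction (all embedding values are invented), not the vendors' actual code: it relies on similarity alone, with no liveness check, which is exactly the kind of system a printed photo defeats.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(candidate_embedding, enrolled_embedding, threshold=0.95):
    """Naive face auth: similarity alone, no check that the face is live."""
    return cosine_similarity(candidate_embedding, enrolled_embedding) >= threshold

# Invented numbers for illustration: a printed photo of the user yields
# an embedding nearly identical to the live face, so it clears the
# threshold just as easily as the real person would.
live_face = [0.2, 0.8, 0.6]
printed_photo = [0.21, 0.79, 0.61]
```

A random stranger's face scores far below the threshold and is rejected, but the photo sails through - which is why a similarity score alone, however accurate, is not an authentication system.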
So, Photoshop has defeated biometrics! My only point is that before we jump head first into the brave new world of biometric systems - which have been touted as the next big thing in computer security - we might want to take notice of the fact that some of them - fingerprint scanners, and now facial ones - have proven to be incredibly easy to bypass.
At the very least, it appears they need a little more time, certainly more than we've all been led to believe. One day, perhaps, biometrics will be the security scanner of choice; for now, I think the DMV can make do with the old fashioned way...and somehow I think we'll all be just fine.
Tuesday, February 17, 2009
Now this is what I call high political drama (and a big victory for privacy advocates)!
With one day to spare, the Joint Legislative Budget Committee (JLBC) stepped in to reject the DMV’s proposal to impose sweeping new biometric technologies - such as facial and thumb print scans - as elements in a renewal of a vendor contract to produce driver’s licenses and ID cards.
The Consumer Federation of California had joined organizations from across the political spectrum – including the ACLU, Electronic Frontier Foundation, California Eagle Forum, Consumers Union, Privacy Activism, Privacy Rights Clearinghouse, and the World Privacy Forum - to urge the legislature to reject the DMV's request on the grounds that any change of this magnitude should be a policy matter for the legislature to decide, after considering whether it is effective, affordable, and if it contains the appropriate privacy safeguards.
Had the JLBC not acted in time, the proposal would have moved forward. Thankfully, at the very last moment a letter unequivocal in its opposition to the proposal was sent to the DMV by Senator Denise Ducheny, the Committee Chair.
Click here to read the complete letter.
Here's a particularly important passage:
"Of particular concern is the proposed use of biometric technology as part of the card issuance process and the related privacy issues. I think the Legislature should consider the policy implications of using biometrics in the issuance of driver licenses before the department starts to use the technology. In addition, after review and discussions with DMV, the Analyst concluded that the request was not fully justified, in part because the department was unable to provide key information on the specific costs and benefits related to the proposed use of biometrics."
For a backdrop on all that transpired in the last week, and a more detailed explanation of biometrics technology and how it poses a threat to privacy if not properly implemented, here's our (CFC) most recent article on the subject.
I'd also like to thank Edwin Garcia of the San Jose Mercury News for covering this slick attempt by the DMV to circumvent the democratic process. He covered the initial story in an article last week, and did so again once the verdict came down from the JLBC on Thursday.
A key legislative committee has blocked the DMV's request to fast-track a new technology that the agency is seeking to deter identity theft, scoring a victory for privacy-rights groups. The Department of Motor Vehicles recently proposed a $63 million contract with a company that uses facial-recognition software, which can detect whether a person photographed for a new driver's license already has a license. The software allows the DMV to match a photograph with the entire DMV database of driver's license pictures.
But privacy groups strongly objected, fearing police could borrow the DMV's biometric technology to monitor people at public gatherings. Privacy groups said police would be able to photograph "innocent people" and scan their picture into the software, then match it with the database, which in turn could reveal a person's name and address.
The DMV sought permission from Gov. Arnold Schwarzenegger to sign the contract as early as this week, without the scrutiny of public hearings. Privacy advocates and Sen. Joe Simitian, D-Palo Alto, objected.
On Thursday, the Joint Legislative Budget Committee used its power to block the DMV from fast-tracking the contract.
"I think the Legislature should consider the policy implications of using biometrics in the issuance of driver's licenses before the department starts to use the technology," wrote committee Chairwoman Denise Moreno Ducheny in a letter to the Administration. As a result of the letter, the DMV's request for the five-year contract will undergo public hearings.
Granted, we have only temporarily averted this power grab by the Administration and the DMV. Still, it's quite an accomplishment - and a victory - to first discover that this proposal was even in the works, and then stop it in its tracks in the two-week window we were afforded. Now comes the bigger fight...for the public to see and the legislature to decide. I will keep you all up to speed as this debate transpires.
Friday, February 13, 2009
The Federal Trade Commission has finally released its initial - and long awaited - guidelines designed to ensure the privacy of people whose online information is gathered by marketers. The consensus among consumer and privacy rights groups is the rules don't go nearly far enough.
Business Week reports:
Among the recommendations: Every site that follows Web-use patterns to tailor marketing messages, a practice known as behavioral targeting, should spell out how it is collecting data and give consumers the ability to opt out of targeting.
The report also urges sites to keep collected data "as long as is necessary to fulfill a legitimate business or law enforcement need," inform users of any changes made to privacy policies, and only collect sensitive personal data—such as financial and health records—in cases where the user opts in.
Guidelines may be a step in the direction of protecting privacy, but consumer advocacy groups say the government needs to pass legislation that regulates behavioral targeting practices. The FTC "should have told Congress it's time to act and create legislative safeguards," says Jeffrey Chester, founder and executive director of the Washington-based Center for Digital Democracy.
The CDD and other consumer groups also say the FTC doesn't provide sufficient guidance in areas such as the definition of sensitive data. The FTC "encourages industry, consumer, and privacy advocates, and other stakeholders to develop more specific standards to address" sensitive data. But, says Pam Dixon, executive director of the World Privacy Forum, "the industry already has developed [definitions for sensitive data] but they are absolutely inadequate." Protecting health-related information is especially important as the government's proposed stimulus legislation earmarks funds for the management of online health records, Dixon says.
Now, it's up to the Administration of President Barack Obama to appoint a chairman to head the commission, one of the few agencies of its stature that still lacks a head. According to online privacy experts, current FTC Commissioners Leibowitz and Pamela Jones Harbour are top contenders for the job. If the commissioners' warnings hold credence, stricter government regulation of behavioral targeting could be on the way. Says Leibowitz: "We're going to stay very involved in this area. We're all hopeful that self-regulation will work, but some of us are more skeptical."
It appears the FTC is using that good ole' Bank Bailout (TARP) template of, "Regulate yourselves, we trust you to do what's right." One would think this kind of faux regulation had seen its last day after the drubbing conservatism and free market fundamentalism took in the past couple of elections, but apparently we just haven't gotten that "regulation thing" down yet.
Pleading, wishing, and hoping that big business does what's best for the consumer is a sure fire recipe for failure. Granted, FTC officials didn't rule out pushing for stricter legislation, warning that if these companies didn't conform on their own to the guidelines they COULD be in FUTURE trouble IF regulations are passed. Wow, if that doesn't send chills of fear down their corporate spines I don't know what will!!
All joking aside, this is still a step in the right direction, and it demonstrates a growing understanding among government officials that privacy on the internet is a growing concern and must be adequately addressed.
So, for educational purposes, as this guy knows a lot more about this issue than I do, let me go to a statement I found by Jeff Chester, Exec. Director, Center for Digital Democracy on the new FTC guidelines and the larger issue of online marketing and data collection:
The Federal Trade Commission is supposed to serve as the nation’s leading consumer protection agency. But for too long it has buried its mandate in the `digital’ sand, as far as ensuring U.S. consumer privacy is protected online. The commission embraced a narrow intellectual framework as it examined online marketing and data collection for this proceeding. Since 2001, the Bush FTC has made industry self-regulation for privacy and online marketing the only acceptable approach when considering any policy safeguards (although the Clinton FTC was also inadequate in this regard as well). Consequently, FTC staff—placed in a sort of intellectual straitjacket—was hampered in their efforts to propose meaningful safeguards.
Advertisers and marketers have developed an array of sophisticated and ever-evolving data collection and profiling applications, honed from the latest developments in such fields as semantics, artificial intelligence, auction theory, social network analysis, data-mining, and statistical modeling. Unknown to many members of the public, a vast commercial surveillance system is at the core of most search engines, online video channels, videogames, mobile services and social networks. We are being digitally shadowed across the online medium, our actions monitored and analyzed.
Behavioral targeting (BT), the online marketing technique that analyzes how an individual user acts online so they can be sent more precise marketing messages, is just one tool in the interactive advertisers’ arsenal. Today, we are witnessing a dramatic growth in the capabilities of marketers to track and assess our activities and communication habits on the Internet. Social media monitoring, so-called “rich-media” immersive marketing, new forms of viral and virtual advertising and product placement, and a renewed interest (and growing investment in) neuromarketing, all contribute to the panoply of approaches that also includes BT. Behavioral targeting itself has also grown more complex. That modest little “cookie” data file on our browsers, which created the potential for behavioral ads, now permits a more diverse set of approaches for delivering targeted advertising.
We don’t believe that the FTC has sufficiently analyzed the current state of interactive marketing and data collection. Otherwise, it would have been able to articulate a better definition of behavioral targeting that would illustrate why legislative safeguards are now required. It should have not exempted “First Party” sites from the Principles; users need to know and approve what kinds of data collection for targeting are being done at that specific online location.
The commission should have created specific policies for so-called sensitive data, especially in the financial, health, and children/adolescent area. By urging a conversation between industry and consumer groups to “develop more specific standards,” the commission has effectively and needlessly delayed the enactment of meaningful safeguards.
On the positive side, the FTC has finally recognized that given today’s contemporary marketing practices, the distinction between so-called personally identifiable information (PII) and non-PII is no longer relevant. The commission is finally catching up with the work of the Article 29 Working Party in the EU (the organization of privacy commissioners from member states), which has made significant advances in this area.
We acknowledge that many on the FTC staff worked diligently to develop these principles. We personally thank them for their commitment to the public interest. Both Commissioners Leibowitz and Harbour played especially critical roles by supporting a serious examination of these issues. We urge everyone to review their separate statements issued today.
Today’s release of the privacy principles continues the conversation. But meaningful action is required. We cannot leave the American public—now pressed by all manner of financial and other pressures—to remain vulnerable to the data collection and targeting lures of interactive marketing.
Stay tuned for President Obama's choice for the next Chairman of the FTC...
Tuesday, February 10, 2009
According to the health-care and drug-industry lobbies, they don't do anything nefarious with our medical records...Scout's Honor! This of course contradicts the fact that armies of their high-priced lobbyists have been descending on SUBCOMMITTEE hearings being held to determine what kind of privacy protections should be applied to our electronic medical records.
The massive transition to e-health records is a key component of both President Obama's health care proposal and the stimulus package itself. The problem is that, just as the Senate watered down all kinds of important stimulus spending in the Recovery Act, it is also weakening the privacy standards established by the House. The good news is that Obama initially supported the privacy measures introduced in the House. The bad news is that he hasn't officially endorsed either version to date.
Just one aspect of this privacy debate centers around an issue we at CFC know very well: the selling of prescription records to third party marketers. In fact, we helped torpedo legislation last year designed to allow this insidious practice to become legal in California.
The Washington Post Reports:
The effort to speed adoption of health information technology has become the focus of an intense lobbying battle fueled by health-care and drug-industry interests that have spent hundreds of millions of dollars on lobbying and tens of millions more on campaign contributions over the past two years, much of it shifting to the Democrats since they took control of Congress.
At the heart of the debate is how to strike a balance between protecting patient privacy and expanding the health industry's access to vast and growing databases of information on the health status and medical care of every American. Insurers and providers say the House's proposed protections would hobble efforts to improve the quality and efficiency of health care, but privacy advocates fear that the industry would use the personal data to discriminate against patients in employment and health care as well as to market the information, often through third parties, to generate profits.
Resolving these competing visions will be the task of House and Senate negotiators. The outcome could determine, for example:
· whether a hospital or doctor can make a profit by selling people's medical data, without their consent, to pharmaceutical companies for research;
· whether a hospital or other provider must obtain patient consent before sending them fundraising letters.
Consumer advocates assert that the health industry is already reaping billions by gathering, mining and marketing personal health data and is mainly worried that the privacy provisions would threaten that income stream.
"When a patient walks in the door of a hospital or a pharmacy, that hospital or pharmacy sees not just one dollar sign, but two," said Tim Sparapani, senior legislative counsel for the American Civil Liberties Union. "The first dollar is what they earn from treating the patient. The second is the ability to sell the information about the patient." The risk, he said, is that Americans could face difficulties getting health insurance or a job because of the information available in "for sale" medical records. Industry officials say that they fully comply with federal privacy regulations, which they contend are adequate.
Sparapani said that in the 16 years he has worked on the Hill, he has never seen lines of lobbyists for subcommittee hearings -- let alone votes -- like those on this issue. "There's so much money invested in this and so many corporate entities that are touched by this legislation," he said, "that the lines reminded me of those of the inauguration."
One provision that has generated a great deal of lobbying on both sides would, for instance, bar a drug company from paying a pharmacy to send marketing letters to patients unless the patient consents. The National Association of Chain Drug Stores, which represents CVS Pharmacies and Walgreen's, among others, and which doubled its lobbying spending in 2008 to $1.4 million, opposed the provision. It sent a letter to lawmakers arguing that the restriction might block pharmacists from sending refill reminders.
...when Democrats on the House Ways and Means Committee saw the language, they became concerned it would maintain a loophole that allows drug companies to pay pharmacists to send letters -- at up to $4.50 per letter -- pitching more expensive alternative drugs to their customers. They revised the language to close the loophole. Blunt tried once more to amend the bill to the pharmacists' liking but failed.
Click here to read more.
Friday, February 6, 2009
I want to continue my investigation into the privacy pros and cons of the recently announced Google Latitude. For a backdrop on the issue, both in terms of the convenience of the tool as well as its privacy protection shortcomings, check out Wednesday's and Thursday's posts.
I want to assure everyone that I'm no Luddite, and neither are any of the privacy advocates I know and work with. The vast majority of the time, the issue really isn't with the technologies themselves, but rather, the ways in which they COULD be used, and the lack of safeguards to ensure they aren't. I'm not just talking about protecting against identity theft, I'm talking about protecting that which makes us free: our civil liberties and right to privacy.
So the real question is: when such technological advancements are introduced into the marketplace, is someone asking the tough questions and demanding the proper safeguards?
On that note, I want to reiterate a passage from the Electronic Frontier Foundation that I posted yesterday, which perfectly articulates what I'm trying to say:
"Technology isn't the real problem, though; rather, the law has yet to catch up to our evolving expectations of and need for privacy. In fact, new government initiatives and laws have severely undermined our rights in recent years."
Now, when I first heard of Google Latitude my immediate reaction was, "Ok, how can this technology be abused? Who could abuse it and why? And what could be done to prevent this?"
As I suspected, Google once again appears to have failed the "privacy protection test". I'm not a technology expert, but I certainly know where to find them, and I found this article by Privacy International that raises a host of problems with Google Latitude as it relates to the protection of user privacy.
A few clips from the piece:
After studying the system documentation, PI has determined that the Google system lacks adequate safeguards to protect users from covert opt-in to Latitude’s tracking technology. While it is clear that Google has made at least some effort to embed privacy protections, Latitude appears to present an immediate privacy threat.
Latitude is based on a reciprocal opt-in system. That is, before a person can be tracked, a sharing arrangement must be agreed with a requesting party. After this process has been executed, location data is made available on a time-to-time or continuous basis.
On the face of it, this arrangement might seem an adequate protection. However this safeguard is largely useless if Latitude could be enabled by a second party without a user’s knowledge or consent. Privacy International believes this risk is substantial and could in the future adversely affect millions of phone users.
Privacy International believes Google has created an unnecessary danger to the privacy and security of users. It is clear the company is aware of the need to create a message alert on Latitude-enabled phones but has chosen to launch the service without universal access to this safeguard. The Director of Privacy International, Simon Davies, said:
"Many people will see Latitude as a cool product, but the reality is that Google has yet again failed to deliver strong privacy and security. The company has a long way to go before it can capture the trust of phone users."
"As it stands right now, Latitude could be a gift to stalkers, prying employers, jealous partners and obsessive friends. The dangers to a user’s privacy and security are as limitless as the imagination of those who would abuse this technology."
Check out the rest of the article, particularly the five scenarios Privacy International lays out in which a covert threat could arise.
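To make the reciprocal opt-in model (and the covert-enablement loophole Privacy International describes) a bit more concrete, here's a minimal Python sketch. The class and method names are my own invention for illustration, not anything from Google's actual implementation:

```python
# Hypothetical sketch of a reciprocal opt-in sharing model -- the names
# and structure here are invented for illustration, not Google's code.

class LocationSharing:
    def __init__(self):
        self.approved = set()  # (watcher, target) pairs with consent

    def request_and_accept(self, watcher, target):
        """Both parties agree before any location data flows."""
        self.approved.add((watcher, target))

    def get_location(self, watcher, target, locations):
        # Location is released only if a sharing pair was approved.
        if (watcher, target) not in self.approved:
            raise PermissionError("no sharing arrangement in place")
        return locations.get(target)
```

The loophole PI worries about: if someone with brief physical access to a phone can trigger the equivalent of `request_and_accept()` on the owner's behalf, the "consent" check is satisfied without the owner ever knowing, which is exactly why a persistent on-phone notification matters.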
Thursday, February 5, 2009
Great news! Our coalition's efforts – spanning the political spectrum, including the ACLU, Electronic Frontier Foundation, California Eagle Forum, Consumers Union, Privacy Activism, Privacy Rights Clearinghouse, and the World Privacy Forum – have successfully broken the story I reported on Tuesday into the mainstream news. Not surprising, either, is the full-fledged support from State Senator Joe Simitian, a longtime privacy stalwart.
To get the whole story on the DMV's attempt to bypass the legislative process in order to establish an “enhanced” biometric identification program in California, check out Tuesday's post.
The bottom line is that the DMV and the Department of Finance are seeking to create a massive government database of biometric information from virtually every Californian over the age of 16 without debate or review - raising significant concerns regarding the increased surveillance, monitoring and tracking of individuals.
The good news is that the San Jose Mercury News's Edwin Garcia got wind of this story from us, and wrote an excellent article on it that was published today in both the Merc and Contra Costa Times.
Again, to get the backdrop on Biometrics, and the DMV's end around effort, check out Tuesday's post.
The proposed $63 million contract includes facial recognition software that would allow the DMV to quickly compare an applicant's new photo against other photos in the agency's database in an effort to deter identity theft. The system could eventually include as many as 25 million images of drivers statewide.
"What this would allow law enforcement to do is scan a crowd of folks, check that image against the database and have their names and addresses," said Valerie Smalls Navarro of the American Civil Liberties Union in Sacramento. The ACLU is fighting the proposal with a handful of other groups, including Consumers Union, the Electronic Frontier Foundation and the Consumer Federation of California, which says the plan poses "massive threats" to personal privacy.
"We see this as sort of creeping Big Brother government, an invasion of people's privacy," said Richard Holober, executive director of the San Mateo-based Consumer Federation of California.
Sen. Joe Simitian, D-Palo Alto, perhaps the most outspoken lawmaker when it comes to privacy issues, is urging his colleagues to put the contract proposal before a public hearing, where DMV officials could provide more details about the facial recognition technology.
"There are at least four questions I want to ask," Simitian said. They are: Does the technology work? How much does it cost? Does it make the public safer? How will privacy be protected? "None of those questions should be avoided or evaded by doing an end around the process, which is really what's being proposed here," Simitian said.
Obviously I'll be covering this issue right here as it progresses. In fact, we only have until February 11th to stop this power grab and place this issue in its proper place before the public and the full legislature.
Click here to read the rest of the article.
I also want to add a comment about yesterday's news regarding Google's new "Google Latitude", which will allow users to track their friends as well as be tracked themselves (most notably by Google and advertisers). One issue I forgot to bring up - amazingly - was the concern that the Federal Government, particularly in light of the retroactive immunity given to telecoms for wiretapping, could demand Google turn over all the tracking information it has on everyday American citizens.
It's not at all hard to imagine such a scenario, and it's not at all hard to believe that Google, or any corporation for that matter, would turn over everything it has on you in the blink of an eye. Just something to consider as we enter the Brave New World of new technologies enabling unparalleled invasions of privacy and tracking abilities.
Further, as reported by the BBC, there's another privacy concern we should be aware of:
"Privacy watchdog Privacy International argues that there are opportunities for abuse of the system for those who may not know that their phone is broadcasting its location. Privacy International director Simon Davies gives the example of employers who might give phones to employees with Latitude enabled.
"With Latitude, Google has taken steps toward privacy that it has hitherto not taken," Mr Davies told BBC News.
"The problem is that they launched the services without allowing all phones to be notified." Google admits that the notification service is currently only available for BlackBerry users.
I think the Electronic Frontier Foundation sums this dilemma up nicely (that of technology versus privacy): "Technology isn't the real problem, though; rather, the law has yet to catch up to our evolving expectations of and need for privacy. In fact, new government initiatives and laws have severely undermined our rights in recent years."
Wednesday, February 4, 2009
It seems today's buzz centers around a new service called Google Latitude that's designed to help users share their whereabouts, along with photos and short updates, with a small group of friends and family members.
Now, it appears, at first glance, that there are a number of features built in to protect users’ privacy. For instance, users must opt in to the service, which only shares their location with people they allow to see it. Google only stores a user’s last recorded location, making it impossible for the company to follow someone’s whereabouts over time.
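The "last recorded location only" design is worth pausing on, because it's a genuine privacy safeguard: if only the most recent fix is stored, there's no movement history to mine or subpoena. Here's a tiny sketch of the idea (my own simplification, assuming a plain key-value store rather than whatever Google actually uses):

```python
# Illustrative sketch: store only the most recent location per user,
# so no movement history ever accumulates. This is a simplification
# of the design described above, not Google's actual storage layer.

last_location = {}

def report_location(user, lat, lon):
    # Overwrites any previous fix -- there is no trail left behind.
    last_location[user] = (lat, lon)

report_location("alice", 37.77, -122.42)
report_location("alice", 37.80, -122.27)  # the earlier fix is gone
```

The privacy properties of a system like this depend entirely on the overwrite actually happening; a system that quietly logged each update elsewhere would look identical to the user.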
Still, based on the fact that users must connect to their Google accounts to initiate the service –such as those used for Gmail – the location information could be linked to data about other things people do online.
I suspect (and I'm just getting wind of this) that this particular loophole may not sit well with some users and privacy advocates already concerned about the volumes of data the search giant keeps.
We should also be honest about this. The real motive behind offering such a tool is the potential advertising boon of delivering ads based on where people are in real-time...just imagine!
I should also say that on numerous occasions I would have found this technology useful, like when I'm trying to meet friends at a party, a bar, a rock concert, or even on a camping trip.
I found this little post by ReadWriteWeb’s Rick Turoczy on how privacy might be threatened by Google Latitude. He says it’s all about the data:
For millions of users, Google already knows how they search, what they click, what they buy, who they know, how they communicate, and where they go on the Web. Location enables them to add another critical data point - where they are when they’re performing any of those actions. So if you think Google has too much information about you already, you’ve got another think coming.
Long story short, Latitude adds a whole new level of complexity to Google’s understanding of you and your habits. And while we’ll no doubt derive some very interesting benefits from sharing that information, we should hold no illusions about the value of that data to Google and its efforts to run a profitable business.
But it's also a leap of faith as a user, entrusting Google with yet another piece of data that helps them figure out the puzzle of understanding you - and how and where you're likely to perform actions that put money in Google's pocket. It will be interesting to see where Google goes with this one - and interesting to see where you're going, now that we can look over your shoulder.
The UK's Guardian reports:
The new feature, dubbed Latitude, is part of the Google Maps 3.0 software update, and will initially only be available on BlackBerry mobile phones and those devices running the Windows Mobile and Symbian S60 operating systems. It will be rolled out to iPhone and Google Android users in the coming weeks.
Once users sign up to Latitude, an icon representing their position, and the position of friends and contacts, will appear on the Google Maps software on their mobile phone. It can even provide directions to help users navigate their way to their friend’s location, and users can click on a friend’s icon to call, text, and email them, or send an instant message. There is also the option to add a “status update”, so that users can see what their friends are doing.
...Google currently has no plans to make its Latitude service work with third-party websites, such as Facebook and Twitter, because of the need to stringently manage the privacy needs of users.
I personally don't know all the ramifications, both pro and con, of this technology yet. I'll be interested to dig a little to find out whether ALL the proper safeguards are included, or whether there might be some areas that are still lacking.
Click here to read the rest of the article.
Tuesday, February 3, 2009
Today I get to report on something we at the CFC are directly involved in.
Let's start from the beginning of what has now been a few days of hurried activity among a coalition of organizations to stop a rather blatant attempt by the California DMV and the California Department of Finance to bypass proper legislative oversight.
On January 14th the California Department of Finance – without notifying the public – sent a letter to inform the state Joint Legislative Budget Committee that it planned to issue a new vendor contract for production of California Driver’s Licenses, ID cards and Salesperson cards starting in June of 2009.
Hidden in the fine print, the proposal called for “enhanced” biometric identification in state IDs. Unless this legislative committee objects to this plan within 30 days, the Department of Motor Vehicles will be free to begin implementing the biometric technology.
What are Biometrics?
Biometric technology is the computerized matching of an individual’s personal characteristics (like a thumbprint or facial scan) against an image or database of images. In other words, the DMV and the Department of Finance are seeking to create a massive government database of biometric information from virtually every Californian over the age of 16 without debate or review - raising significant concerns regarding the increased surveillance, monitoring and tracking of individuals.
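In rough terms, that "computerized matching" is a similarity comparison between a fresh sample and stored templates. The sketch below uses a made-up similarity function and threshold purely to illustrate the mechanics; real systems use far more sophisticated models, but the structure - and the threshold where false accepts and false rejects live - is the same:

```python
# Toy illustration of one-to-many biometric matching against a
# database of stored templates. The feature vectors, similarity
# function, and threshold are all invented for this sketch.

def similarity(a, b):
    # Simple inverse-distance score between two feature vectors.
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(sample, database, threshold=0.9):
    """Return the best-matching identity, or None if below threshold."""
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = similarity(sample, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

Note that everything hinges on the threshold: set it low and innocent people get matched (false accepts); set it high and legitimate holders get rejected. There is no setting that eliminates both.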
One would expect, in light of the ongoing and intensifying debate over the REAL ID Act (a federal plan to create a national identity card based on drivers’ licenses) and the increasing number and degree of privacy violations committed by the federal government in recent years, that such a program would be fully debated, in the open, by our representatives in the State Legislature and with public comment, before it could ever be enacted.
Because no such debate has occurred, and no attention has been given to the privacy concerns such a program warrants, the Consumer Federation of California has joined organizations from across the political spectrum – including the ACLU, Electronic Frontier Foundation, California Eagle Forum, Consumers Union, Privacy Activism, Privacy Rights Clearinghouse, and the World Privacy Forum – to urge the legislature to reject this request while there’s still time.
Our case against the proposal is twofold.
(1) The first is procedural: the DMV is attempting to use a routine contract renewal process to effectuate major policy changes. As the ACLU notes:
• A 30-day expedited opt-out letter to the Legislature is an inappropriate vehicle to move from photographs and thumbprints of millions of Californians to advanced facial recognition technology and biometric systems that pose a number of privacy and security concerns if not handled carefully.
• The DMV does not appear to have authority to implement biometric technologies that the Legislature has considered and rejected over the years, without the issues being fully considered and addressed in policy and budget hearings.
(2) The second relates to privacy and security: the underlying proposal to use biometric technologies has yet to establish appropriate safeguards to protect against identity theft and unwarranted government snooping into our private lives.
It’s important to understand the limitations of biometrics as well as their strengths. The fact is, biometrics are easy to steal. Our fingerprints are left everywhere we touch, and our iris scans are everywhere we look.
According to experts, biometrics work only if two things can be verified by the verifier: one, that the biometric came from the person at the time of verification, and two, that the biometric matches the master biometric on file. If the system can't do that, it can't work.
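Those two conditions - liveness and match - can be expressed as a simple sketch. The helper functions here are hypothetical stand-ins for whatever sensors and matching hardware a real verifier would use:

```python
# Sketch of the two checks described above. check_liveness() and
# matches_template() are hypothetical stand-ins passed in by the
# caller, not real sensor or matcher APIs.

def verify(sample, template, check_liveness, matches_template):
    # 1. Did the biometric come from a live person, right now?
    if not check_liveness(sample):
        return False
    # 2. Does it match the master biometric on file?
    return matches_template(sample, template)
```

A stolen fingerprint image passes check 2 but should fail check 1; if a system can't enforce liveness - as the webcam-spoofing research mentioned above suggests some can't - the whole scheme breaks down.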
Once again, the ACLU provides some critical insights:
• Where a biometric identifier is used as a unique identifier to catalogue personal information about an individual, it would enable the surveillance, monitoring and tracking of individuals. Law enforcement currently has access to DMV’s database of more than 25 million people. It appears that the biometric thumbprints and facial scans from the DMV will be used in criminal investigations. As public and private surveillance cameras become more ubiquitous, the likelihood increases that use of facial recognition devices will go beyond legitimate criminal investigations and become a tool to track and record the movements of innocent people.
• If a biometric database is hacked, an identity thief could substitute his or her fingerprints or facial scan in someone else’s file. Security of any biometric database is a paramount concern that should be addressed in a public legislative hearing process, not by massively expanding the scope of work in a routine contract renewal notice from Department of Finance.
• It is far from clear that biometric imaging as proposed by DMV is required by the Real ID Act. While the Bush Administration’s Department of Homeland Security was pushing for biometric facial image capture, it did not require biometric finger printing. The Obama Administration has already committed to revisiting Real ID. Today, there is no federal impetus behind this move by the DMV.
• How does California pay for the whole new system? The Department of Finance memo speaks of “significantly higher cost of doing business in the current market and economic environment” as compared to the last vendor contract issued ten years ago. At a time when California is going broke, can we afford the immediate projected costs of $4.3 million in this fiscal year, and estimates of $12.5 million per year in each subsequent year?
Which teachers and police officers will be laid off to cover the added costs of a system that the legislature never approved?
Bruce Schneier, president of Counterpane Systems, and author of Applied Cryptography, is considered to be one of the nation’s foremost experts on biometrics. In January of this year he wrote:
“We haven't yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they're not secrets.”
“One more problem with biometrics: they don't fail well. Passwords can be changed, but if someone copies your thumbprint, you're out of luck: you can't update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you're stuck. The failures don't have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.”
Joint Legislative Budget Committee Must Reject Proposal by February 11!
Our coalition is therefore urging the Joint Legislative Budget Committee to object to the DMV’s proposal to impose sweeping new biometric technologies as an element in a renewal of a vendor contract to produce driver’s licenses and ID cards. A change of this magnitude should be a policy matter for the legislature to decide, after considering whether it is effective, affordable, and if it contains the appropriate privacy safeguards.
For more information on biometrics, check out this article by Bruce Schneier.
Monday, February 2, 2009
Some good news to report on the ongoing debate over how best to protect patient privacy in the brave new world of medical record digitization.
The good news is that the New York Times editorialized on the topic on Saturday, adding some much needed attention to this debate. As I mentioned last week, given that the digital transition of our medical records is a key component of both President Obama's health plan AND his economic stimulus package, this debate is an especially important one to have now, while the system is still being planned and constructed.
From the Editorial:
The idea is sound, but it also raises important questions about how to ensure the privacy of patients. Fortunately, the legislation would impose sensible privacy protections despite attempts by business lobbyists to weaken the safeguards.
With paper records the opportunities for breaches are limited to over-the-shoulder glimpses or the occasional lost or stolen files. But when records are kept and transferred electronically, the potential for abuse can become as vast as the Internet.
The potential for harm was spelled out by the American Civil Liberties Union in a recent letter to Congress. Employers who obtain medical records inappropriately might reject a job candidate who looks expensive to insure. Drug companies with access to pharmaceutical records might try to pressure patients to switch to their products. Data brokers might buy medical and pharmaceutical records and sell them to marketers. Unscrupulous employees with access to electronic records might snoop on the health of their colleagues or neighbors.
It should be possible through implementing regulations to fine-tune the privacy requirements so that they do not disrupt patient care. Congress must make every effort to ensure that patients’ privacy is protected.
Click here to read more.