A bill before New York's City Council reopens questions about how cities can protect residents’ digital well-being

Last March, the Digital Equity Laboratory released a report finding that all of New York City’s major consumer internet service providers (ISPs) failed a test of basic disclosure. Our findings, drawn from extensive analysis of 11 ISPs’ publicly available privacy policies, demonstrated that far from providing consumers with the information they need to make informed decisions about their online privacy, New York’s major ISPs offer only a “take it or leave it” proposition: get online at your own risk. Our report posed forty-four questions across fifteen indicators of data privacy, covering what data ISPs collect, what data they share (and with whom), and what they disclose about their data security practices.

Last month, New York’s City Council introduced a bill that would require some of these ISPs—specifically, legacy cable franchise providers like Charter/Spectrum, RCN, and Altice—to amend their disclosure and privacy-protection practices if they wish to serve NYC customers. The bill, New York City Council Int 1101-2018, would amend the administrative code of the city of New York with the stated goal of protecting customers’ personally identifiable information.

Local governments should take steps towards transparency and accountability in internet access. The New York bill comes at a time when local and state governments are considering ways to protect consumers in the absence of any federal regulation of internet services following the Federal Communications Commission’s repeal of the 2015 Open Internet Rules (“net neutrality”) and accompanying privacy protections.

In this article, the Digital Equity Lab reviews the bill in light of the recommendations we made in our “Take It or Leave It” report. Our goal is to translate the bill’s dense, technical language into plain terms and to identify which of its provisions track with those recommendations.

What’s at stake? Our data bodies and our civil rights

The bad news just keeps coming for digital rights and digital privacy. According to TechWorld, “the US accounts for the overwhelming majority of the really big data breaches that have been made public.” In the most recent breach, hackers stole data from 50 million Facebook users, including log-in credentials that could unlock accounts on other sites such as Tinder, Spotify, and Airbnb. The full impact is still rippling out, and we are not likely to know all the consequences for some time. What we do know is that the breach was likely caught not by Facebook itself, but as a result of the European Union's stringent data privacy rules governing internet traffic moving through Europe, known as the General Data Protection Regulation (GDPR).

What are the types of harm this can cause? Just consider the impact of Russian and other actors working online to undermine the integrity of US elections: some stole the identities of US citizens to create and share false information, while others drew on datasets obtained from brokers to target particular messages. Identity theft can also result in financial crimes. BuzzFeed’s Craig Silverman warned about this, explaining the unholy convergence among the tendency of platforms like Facebook to amplify the most controversial and upsetting stories, the perverse incentives created when clickbait stories generate revenue for advertisers, and the tracking of users’ habits and activity. Altogether, the Cambridge Analytica scandal, reported extensively by The Guardian’s Carole Cadwalladr, is a clear demonstration of how populations can be micro-targeted for political manipulation through data mining.

Then there are the harms that could emerge from the everyday sharing and brokering of the data we give up constantly as the price of participating in economic, civic, and cultural life. Data privacy concerns also center on the way corporations may sell information about users without their knowledge or permission: selling user location data to third parties, for example, is common industry practice. It also became clear recently that it is fairly easy to hack major ISPs and track the location of just about anyone with a cell phone.

Data breaches and data brokering are not the only forms of data privacy risk to ordinary people. Some people may be losing out based on their “data bodies,” or how they appear based on their data. Insurance companies are denying coverage based on data profiling using personally identifiable information (PII) purchased from data brokers: if an analysis of your data shows that you are at risk for a particular disease, for example, insurance companies are legally allowed to deny you coverage, and there is nothing stopping ISPs from selling personal data for this kind of predictive analysis. The same logic could be applied to any process that requires a credit or background check, like renting an apartment, taking out a loan, or applying for a job.

For already-vulnerable populations, the risks and harms are worse: low-income people and people of color are disproportionately targeted by law enforcement for minor infractions, and this targeting can be deadly, especially for people of color. With the addition of automated criminal justice systems, predictive policing based on flawed historical data, automated decision-making systems for child welfare and other critical social infrastructure, as well as face recognition and other digital surveillance tools disproportionately targeting particular vulnerable communities, the structural harms of already-inequitable systems are amplified and deepened. We also know that many technologies are inherently, systematically flawed in ways that impact the lives of people of color: Safiya Noble has shown how search engines provide a discriminatory version of reality, and Latanya Sweeney has shown how online ad delivery is often discriminatory, leading to a sort of internet redlining for particular populations.

Whereas the GDPR now plays a role in protecting the data of European internet users, and California has enacted its own net neutrality law (which stands for now, despite a lawsuit by the US Department of Justice), US internet users in general are not protected by law from risks and harms emerging from internet use. At most, the Federal Trade Commission could hold internet-driven corporations such as platforms (e.g., Facebook, Twitter) and internet service providers (e.g., Verizon, AT&T, Comcast) accountable for misleading consumers in their advertising, but it is unlikely to consider other kinds of digital harm.

In the absence of federal consumer protections, it is up to the companies themselves, and potentially to institutions at other levels of government, to keep people safe from privacy breaches and unethical or harmful uses of data, and to ensure that people can benefit equally from the promise and potential of technology without risking their well-being and their futures.

Digital risk in NYC’s internet ecosystems

Many cities are taking on the job of building internet access—and, along with it, grappling with the paradox of digital risk. It is a balancing act: getting online brings risk, yet when government does nothing to protect consumers, industry dictates the terms of digital participation.

For the last few years, progress toward closing the so-called “digital divide” has leveled off, with about 25–30% of the US population lacking internet access at home. For cities, this means residents without access to city services, not to mention basic educational and employment opportunities. So cities and others are stepping up to provide access. Yet along with access comes risk. Cities have four basic mechanisms for protecting their residents’ digital safety as access expands and cities themselves become more networked:

  • Policies governing city-owned and operated facilities and services;
  • Franchise agreements with vendors (contracts between local and/or state government and private companies to provide information services like cable) that enforce these policies;
  • Franchise agreements that govern vendors’ policies and practices for residential and mobile service within the city’s jurisdiction; and
  • Programming and investment in connectivity, digital literacy, and digital safety.

These mechanisms affect both internet access in public spaces and private home or business internet services. For example, New York City provides internet services at hundreds of public computer centers, in a handful of wireless corridors, within the footprint of the Queensbridge public housing development, and at other sites citywide, including the LinkNYC kiosks. All of these access points are governed by the City’s privacy policies. In the case of LinkNYC, the City requires that the franchise operator (CityBridge) comply with policies including:

  • [LinkNYC does] not sell personal information or share [it] with third parties for their own use, including the City of New York, without explicit consent, except as required by law, such as in response to a court order;
  • With explicit user consent, personal information such as email address or name may be shared with third parties. Examples of this include if you were to fill out a survey about our Services and agree to be contacted, or if an app on the tablet offered you the option to sign up for further communications;
  • Anonymized technical information [e.g., anonymized MAC addresses] may also be shared with third parties to improve the Services;  
  • LinkNYC uses anonymized data, which means information about a user that cannot be tied back to a particular person. LinkNYC uses this anonymized data to understand usage to improve Services and to inform advertising that appears on Link kiosks; and
  • LinkNYC Wi-Fi does not add or insert any advertising onto your personal device.          

Last month, The Intercept reported that a researcher found code in CityBridge’s GitHub repository that could collect information on users’ whereabouts, movements, and other metadata (such as the types of information that Google has already been shown to collect passively from mobile devices). NYC’s Link kiosks have drawn attention in the past for their potential data collection: for example, some have pointed out that the kiosks collect video footage, and that footage could be shared across City agencies (though according to City policy, that data is purged every seven days). Security experts have also questioned whether device identifiers (MAC addresses) can truly be anonymized as the policy promises (a concern illustrated with a short code sketch below). Nevertheless, if CityBridge were to roll out code on NYC kiosks that violated the City's stated policies—like the code found in the repository—it would be liable to action by the City. Samir Saini, Commissioner of the NYC Department of Information Technology and Telecommunications (DOITT), said as much in a statement:

As a public project, LinkNYC can only exist if it conforms to the City’s unambiguous commitment to user privacy. That means the City does not, and will never, allow the network operator — CityBridge — to exploit individual identifiers or precise location of LinkNYC users. If, at any time during our careful oversight of CityBridge, we discover practices that violate the Privacy Policy, we will direct CityBridge to immediately cease and desist from that practice.

NYC and New York State have previously sued Verizon and Charter/Spectrum for franchise violations, so it is not inconceivable that the City might take action if its privacy policies were shown to be violated. However, open questions remain. For example, if CityBridge partner Intersection (owned primarily by Sidewalk Labs, which is rolling out the controversial Quayside smart-city pilot in Toronto) were to bring the Link kiosks to another city that did not have established privacy policies, how much personally identifiable data could be scraped from every nearby device? And in a project like LinkNYC, where the franchise holder collects revenue (though it does give back to the City to support other broadband access projects), is it fair or right that NYC residents have no public ownership or control over their own publicly generated data? Barcelona, for example, is creating a publicly held data commons generated from smart-city projects, along with democratic processes governing the use of that data.
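
Why is an “anonymized MAC address” a questionable promise? A device’s MAC is drawn from a small, structured space, so a bare hash of it can often be reversed by enumeration. The sketch below is a hypothetical illustration (assuming a naive, unsalted SHA-256 scheme; it is not CityBridge’s actual code) of how such “anonymized” identifiers can be re-identified:

```python
import hashlib
from itertools import product
from typing import Optional

# Hypothetical "anonymization" scheme: keep only the SHA-256 hash of the MAC.
def anonymize(mac: str) -> str:
    return hashlib.sha256(mac.encode()).hexdigest()

# MAC addresses are not secrets from a large space. The first three bytes
# (the OUI) identify the manufacturer and are published by the IEEE, so an
# attacker who guesses the vendor only has to try the remaining three bytes:
# 2**24, about 16.7 million candidates -- minutes of work on a laptop.
def deanonymize(target_hash: str, oui: str) -> Optional[str]:
    for a, b, c in product(range(256), repeat=3):
        candidate = f"{oui}:{a:02x}:{b:02x}:{c:02x}"
        if anonymize(candidate) == target_hash:
            return candidate  # the "anonymous" hash is a device ID again
    return None

# Demo with a made-up device address behind a known vendor prefix:
observed = anonymize("ac:bc:32:01:02:03")
print(deanonymize(observed, "ac:bc:32"))  # -> ac:bc:32:01:02:03
```

Salted, regularly rotated hashes or outright truncation can blunt this attack, but the underlying point stands: a hashed identifier that remains stable over time is a pseudonym, not anonymity.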

Anyone interested in the digital welfare of NYC communities should take a hard look at how cities collect, control, and share data gathered from the public in public spaces. Yet the City also has a responsibility to protect residents from digital risks and harms emerging from its contracts with franchisees for private home internet service.

Privacy and your private (home or office) internet service

As described above, Digital Equity Laboratory researchers have examined the publicly available privacy policies of New York’s eleven major consumer-facing ISPs: four residential providers (RCN, Verizon, Optimum/Altice, and Spectrum) and seven mobile providers (AT&T, Verizon Wireless, US Cellular, MetroPCS, T-Mobile, Boost Mobile, and Sprint Mobile). The research employed a customized version of the Corporate Accountability Index methodology developed by the Open Technology Institute’s Ranking Digital Rights Project, scoring ISPs on 44 questions relating to 15 digital privacy indicators.

Scoring methodology from Digital Equity Laboratory, "Take It Or Leave It"

The maximum possible score for an ISP was 44. Yet according to our audit, New York’s ISPs all received failing grades, averaging 12.85 (residential) and 11.3 (mobile) out of 44 points, or 29% and 26%, respectively. Verizon Wireless obtained the highest score with 14.5, and Sprint Mobile the lowest with 8.0. Seven of the eleven ISPs scored within the 11–13 range.
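
The arithmetic behind these grades is simple to check. The sketch below reproduces it in Python; only the 44-point maximum and the published scores come from the report, and the rounding convention is our assumption:

```python
MAX_SCORE = 44  # one point available per question, across 15 indicators

def percent(score: float) -> int:
    """Convert a raw audit score into the percentage grade cited above."""
    return round(100 * score / MAX_SCORE)

print(percent(12.85))  # residential average -> 29
print(percent(11.3))   # mobile average      -> 26
print(percent(14.5))   # Verizon Wireless (highest) -> 33
print(percent(8.0))    # Sprint Mobile (lowest)     -> 18
```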

Thus our research indicates that self-regulation among residential and mobile ISPs regarding transparency, data management, and privacy protection has created an industry standard of extracting uninformed consent from consumers.

NYC Internet Service Providers' privacy scores, from Digital Equity Laboratory, "Take It Or Leave It"

Specifically, we found that:

  • While New York City has eight official languages, providers’ policies are available only in English and Spanish (and in one case French), locking out almost one million New Yorkers who have limited proficiency in both English and Spanish;
  • All reviewed policies allowed third-party data sale and sharing without disclosure to users;
  • All reviewed privacy policies employ overly broad and legally vague language. For example, ISPs will collect any and all data useful “for business, tax or legal purposes” or “if it is reasonably necessary to satisfy any applicable law, regulation, legal process or enforceable governmental request,” making it difficult for consumers to understand the consequences of their choices in practice;
  • All policies used overly broad terms to describe types of data collected. Lists of non-specific data categories for which data collection and sharing are allowed include “personal identifiers,” “device information,” “website usage,” “credit card information,” “location information,” and “content of emails, call records, video recordings,” etc., implying that essentially no category of data is exempt from collection;
  • No policy laid out a process for obtaining meaningful consent to collect, share, or sell consumer data. For example, although some policies refer to “necessary consent,” none defined the term or specified what steps would be taken to obtain consent. In practice, “necessary consent” may amount to nothing more than agreeing to the terms of a contract at sign-up;
  • No policy committed to inform consumers of data transfers of ownership resulting from company sales or mergers. Thus, even if a consumer were to select a provider based on its stated data privacy policies, their data could be transferred to another company without their knowledge or consent;
  • All except one provider failed to inform users how long their data is retained and what it is used for after services are terminated. Thus, if consumers were to close their accounts due to violations of privacy or changes to a privacy policy, data collected from them prior to termination of their contracts would still be available to the provider to share or sell;
  • None of the policies indicates what legal or other standard is used to determine whether to provide data in response to legal proceedings; they merely state that they will “comply with legal proceedings”;
  • All except one provider failed to indicate that users will be contacted in case of a data breach. None provides information on how users would be contacted in case of a breach, nor on what steps would be taken to limit harm or make practices more secure in the event of one; and
  • No major provider in New York offered a comprehensive or selective opt-in or opt-out approach for data collection and sharing.

Our analysis found that it would not be possible for consumers to understand the consequences of their choices; e.g., to decide whether and how they are comfortable with their browsing history, personally identifiable information, and other data being tracked, sold, or shared as permitted by stated policies. Furthermore, users are not informed of any standard practices regarding liability, litigation, or mandatory arbitration in the case of digital privacy harms, and have no recourse if they are harmed as a result.

Sample indicators from Digital Equity Laboratory, "Take It Or Leave It"
Overall scoring from Digital Equity Laboratory, "Take It Or Leave It"

To address these flaws, our recommendations prioritized easily accessible customer-facing policies with specific information to enable informed choice, including:

  • Policies available in multiple languages (Chinese-Cantonese, Chinese-Mandarin, Haitian Creole, Bengali, Russian, and Korean) to better serve the 967,370 New Yorkers who do not speak English or Spanish;
  • Conspicuous, persistent, specific, plain, and appropriate language to enable informed customer choice;
  • Required direct notice to consumers of privacy policy updates, with change logs made available within a reasonable period of time;
  • Standard disclosure practices regarding third-party sharing, sale, retention, and treatment of customer data;
  • Required periodic security audits of personnel and technology systems managing sensitive customer data, with consumer announcements that security audits have occurred;
  • Disclosure of the legal or other standards the ISP will use to determine whether to respond to requests for customers’ data and information;
  • Increased consumer control, including the ability to opt in and opt out of practices that endanger people;
  • Specific requirements for informed consent that must be satisfied before companies may use data for particular purposes, including sharing with third parties (with an exception for law enforcement, if it has demonstrated that disclosure would endanger an ongoing investigation);
  • Standard limitations on the retention of customer data, and on use of retained customer data;
  • Required disclosure to affected users and standard mitigation protocols in the case of data breaches;
  • Required notification and opt-out options with regard to data transfer in the case of a company sale or merger, barring ISPs from exercising a “take-it-or-leave-it” approach;
  • Disallowing mandatory arbitration clauses for deciding privacy disputes and allegations of harm deriving from data;
  • Continued creation by the City, ISPs, and other partners of municipal or public-private programs, including:
     • Awareness campaigns so that consumers better understand their choices with regard to digital risks and safety;
     • Independent, ongoing public audits of providers’ stated practices and policies, including ongoing reporting on the evolving privacy practices of providers; and
     • Online Privacy Policy Forums where customers can ask direct questions that companies must answer; such a platform would allow customers to directly engage in the protection of their own privacy rights.

City Council Bill Int 1101, specifically designed to apply to cable internet providers like RCN, Optimum/Altice, Spectrum, and Verizon (though not mobile providers), includes many of the privacy-protecting measures DEL recommended. Among the bill’s provisions:

  • Cable internet providers must obtain explicit opt-in consent for data collection (not merely an opt-out option with data collection set as a default);
  • To obtain opt-in consent, providers must provide customers with advance notice and an explanation of the type and purpose of data collection;
  • Providers are barred from levying any penalty (monetary or service-related) on customers who opt out of data collection;
  • Providers are required to provide plain-language explanations in all of their customer notices, legible, viewable in multiple languages, and accessible to people with disabilities;
  • These notices must include regular updates to providers’ privacy policies themselves;
  • ISPs must provide City-approved interfaces allowing customers to communicate with the company, lodge complaints, and learn about providers’ privacy policies;
  • Providers must comply with limits as to who has access to customers’ data, and how it is stored and managed;
  • The City would be able to audit these data storage, management, and personnel practices as needed;
  • Customers would be able to demand that providers purge their personal data if service is ended; and
  • Providers would also have to dispose of personally identifiable information (PII) after a certain time period has passed.

Importantly, these provisions would apply not only to the cable providers themselves, but also to third-party companies, such as data brokers or content partners.

Lacking from the bill is any mention of required disclosure in the case of a data breach; nor is there any mention of what, specifically, should happen to data, or what notifications should be provided to customers, if a provider is sold or merges with another company. Yet overall, the bill would address many of the flaws we found in our March report.

Building toward an expectation of privacy protection

The failure of the market to provide meaningful choice to customers, particularly to vulnerable populations who are most exposed to digital risks and harms, creates a serious problem for consumer health and safety. Users, especially those from vulnerable communities, are increasingly aware of the dangers of digital participation, and have grounds to demand reform.

Municipalities also have a responsibility to increase digital safety and security for residents. As the nation’s largest city, with a large share of vulnerable residents, New York City has an opportunity to shape municipal strategies nationally and to provide incentives for ISPs to improve their privacy policies and consumer protections and customize them to fit the needs of particular user populations.

On a broader level, Digital Equity Laboratory recommends that any new legislation apply to all ISPs regardless of how they are classified (cable, DSL, fiber, or mobile). As it stands, one class of service might be regulated more than others, creating inequities in parts of the City where cable service is not available. We would also recommend that privacy provisions like these apply to City services—not only internet access points, corridors, and kiosks, but also any systems built to serve as platforms for smart-city infrastructure.

Additionally, when it comes to data collected from City-owned and public internet access points, DEL advocates for public access to and decision-making around publicly gathered data. As with the internet generally, the primary revenue source for franchises like CityBridge/LinkNYC is advertising—that’s why the kiosks look like digital broadsides. Any data collected from the Links or any other internet-based system can be used to diagnose technical problems, improve performance, and predict how ads will perform in particular locations. But that should not stop the City from requiring that franchise holders open anonymized data for use by the City and the public. This could happen through the designation of data created by City systems and vendors as a publicly held asset, especially as the City adopts smart technologies.

In addition to local government action to better protect residents’ data privacy, ISPs have an opportunity to model policies and practices that could make their services more attractive and competitive for privacy-aware consumers.

The non-profit and public interest sector can also contribute to overall digital health by creating:

  • Awareness campaigns so that consumers better understand their choices with regard to digital risks and safety;
  • Independent, ongoing public audits of providers’ stated practices and policies, including ongoing reporting on the evolving privacy practices of providers; and
  • Formulation of education, training, and outreach plans, e.g., to support the engagement of vulnerable populations in the 2020 digital Census with their privacy protected.

Overall, our analysis of digital privacy in New York City shows that we are beginning to have an important public debate about data privacy, and that local legislative efforts can indeed bring greater data privacy protections to residents. We invite feedback and other interpretations, and most of all hope that the public will take note and join the discussion about how best to protect data privacy. Given that digital participation is now required to access government services, the economy, and educational and employment opportunities, digital access should never be a take-it-or-leave-it proposition. Such a situation risks leaving vulnerable and marginalized communities behind, or asking them to shoulder the burden of risks for themselves and their families, which could further deepen and reinforce inequities that divide us and undermine democratic ideals.