Done Right, Regulation Is Not the Enemy of Innovation

San Francisco has long been a global hub for innovation and entrepreneurship, and that status cuts both ways. Breakthroughs in scientific and medical research, along with tools that optimize the production of goods, have improved public systems and the quality of life for millions of people.

Like any unregulated and centralized market, though, Big Tech has created serious problems: risks to public security and personal safety, and an economy weakened by automation. All of this unfolds against the backdrop of an accelerating race to monetize every aspect of public and private life, elevating convenience culture at any cost.

Government has not been able to keep up with this “move fast and break everything” market strategy, and so has largely left it unregulated – or worse, has let tech insiders determine what amount of oversight is appropriate. Big Tech has been allowed to set the rules of the game, from safety standards and consumer protections to deciding what counts as a public benefit.

Here in San Francisco, the city has been turned into a live testing environment – often without meaningful consent, transparency, or enforceable standards – while the public absorbs increased risks and corporations capture the upside. In the absence of clear rules, companies optimize for speed, scale, and market advantage, even if it means negative consequences for consumers and residents. They claim public streets, infrastructure, data, institutions, and even residents themselves as proprietary sources of extractable value, rather than shared civic assets governed fairly in the public interest. 

The recent Waymo outage is a useful case study, not because autonomous vehicles are uniquely bad or important, but because it made the governance failure impossible to ignore. When power went out and traffic signals failed, cars stalled in intersections and lanes, backing up traffic, stranding people in the rain, and raising obvious questions about emergency response and basic resilience. It was a real-world stress test: what happens when a “smart” system collides with the messy, failure-prone reality of urban infrastructure.

Done right, regulation is not the enemy of innovation – it’s what makes innovation durable, competitive, and broadly beneficial. Rules create an even playing field, reduce uncertainty for everyone (including startups), and force competition to deliver safety, reliability, and real public value. In the early days of aviation, rigorous certification and safety management systems didn’t kill commercial flight – they helped build public trust and a stable market where new technology could be integrated continuously based on a universal language of standards.

If San Francisco wants to remain a global center of innovation without sacrificing public safety, equity, value or trust, it needs to lead by enforcing a reasonable system of regulation. Here are three ways we could get started today:

Public resources should require public standards

Companies that rely on public resources – streets, sidewalks, airspace, rights-of-way, public data systems, and the marketable data of residents – should not operate without meeting baseline standards of testing, transparency, and accountability.

With autonomous vehicles, basic operational information is routinely treated as proprietary: how systems perform under stress, what failure modes exist, and even fundamental facts about fleet operations (like how many vehicles are on the streets at any given time). With surveillance tools, the harms are often quieter but no less real: expanded monitoring without clear limits, uncertain data retention and access rules, and weak procurement guardrails that lock cities into systems without meaningful public oversight. With public-sector AI tools, the risk is decision-making that residents can’t see, can’t understand, and can’t contest – even when it affects eligibility for benefits, housing access, medical treatment, or public services.

The current logic – deploy first, prove safety later – is backwards. Access to public resources should be conditioned on enforceable standards, not promises of future improvement. If a technology cannot operate safely and predictably during system failures and emergency conditions, it is not ready for large-scale deployment, regardless of how well it performs on ideal days.

Why “the long tail” is where safety lives

Tech executives often wave away catastrophic or complex conditions as the “long tail”: rare scenarios that are too improbable to justify rigorous pre-deployment requirements. The argument is always the same: “It would be unfairly burdensome to build regulation around situations that are unlikely to occur.”

But earthquakes, fires, blackouts, communications failures, major accidents, and acts of violence are not abstract hypotheticals. They are exactly when public systems are most strained and when failure carries the highest human cost – exactly when safety actually matters.

“Unlikely today” does not mean unlikely over the lifespan of a technology. A major power outage in a city like San Francisco is not a surprise – it happens every few years. The likelihood of a major earthquake in the coming decades is high, even though the likelihood of one on any given day is low. We do not evaluate the Fire Department by how well it performs on the majority of days when nothing is burning.

This is also how other safety-critical domains work. Aviation does not treat safety as a press release or a future aspiration. It treats safety as a lifecycle obligation: formal safety management systems, standardized reporting, and robust, transparent mechanisms to learn from incidents and near misses before they become tragedies. If a technology is safety-critical and operates in public space, it should be evaluated and approved based on resilience, not average performance.

What San Francisco can do differently right now: A standards toolkit for emerging technologies

San Francisco should stop reinventing the wheel for each new product and adopt a repeatable governance layer that applies whenever private systems use public infrastructure, public services, or residents’ lives as input data.

Start with risk-tiering. Not every technology warrants the same scrutiny, but every technology should be classified by potential harm. Low-risk tools get light disclosure requirements. High-risk systems – those that affect physical safety, civil rights, public services, or core infrastructure – face higher bars and ongoing oversight.

Require a pre-deployment “safety case” for safety-critical and rights-impacting technologies. This is not a marketing document. It is a structured demonstration of how the system works, what can go wrong, what safeguards exist, and how performance will be monitored. It should include scenario testing that reflects real conditions: infrastructure failures, communications loss, emergency scenes, high-pedestrian environments, and human edge cases. If a company cannot explain and demonstrate how its system behaves on bad days, it is not ready for scale.

Transparency must be the default. Residents deserve to know what technologies are operating in their neighborhoods and what systems the city is relying on to make decisions. Cities like Amsterdam and Helsinki have moved toward public algorithm registers that disclose what systems are in use and why. The UK has established a standardized transparency framework for public-sector algorithmic tools. Canada requires impact assessments and recourse mechanisms for automated decision-making tools in government. These are not radical ideas. They are basic governance practices adapted to modern systems.

Make enforcement real. Permits and authorizations should be conditional and renewable. Expansion should be incremental and earned through performance benchmarks. If performance degrades, permissions should be scaled back just as quickly. San Francisco should also build clear triggers for pauses, penalties, and rollbacks – especially for repeat failures, emergency disruption, or refusal to provide auditable data.

Fix governance before building bureaucracy. There is a temptation, when governance fails, to create a new office or department. San Francisco has tried this before. The Office of Emerging Technology was established in 2019 to serve as a “front door” for new technologies. By 2022, it had not issued a single permit.

What San Francisco needs is not another coordinator. It needs a true front door with actual authority: the power to set conditions, require data disclosure, enforce audit rights, coordinate across departments, negotiate cost recovery, and even say no. That function can sit within existing structures, but only if it is insulated from industry influence and judged by measurable outputs – conditions set, audits conducted, incidents tracked, enforcement actions taken, and public reporting delivered.

The OpenGov contract debacle is a cautionary example of what happens when procurement, oversight, and accountability are treated as afterthoughts or “barriers” rather than core functions of city government. San Francisco should be using those failures to redesign how technology contracts are evaluated, particularly when they involve access to sensitive data or core public services. 

Leadership through standards, not slogans. San Francisco does not need to choose between innovation and governance. It can lead by setting clear, enforceable standards that others follow. Cities around the world are already doing this – conditioning access to public resources on public benefit, safety, and accountability.

If San Francisco is going to remain a place where the future arrives early, it must also be a place where the future is required to work for everyone. That’s not a brake on innovation. It is the minimum any city owes its residents.

____________________________________________________________________________

Michael Redmond is an analyst in the policy areas of environmental sustainability, labor and tech innovation and a contributor to PROPEL.