APIs

A new way to write to the White House

Official White House Photo by Pete Souza


The White House has officially released the write version of the “We the People” application programming interface, which allows developers to feed data back into the petition platform via third-party applications.

“Starting today, people can sign We the People petitions even when they’re not on WhiteHouse.gov,” writes White House Director of New Media Technologies Leigh Heyman on the White House blog. “Now, users can also use third-party platforms, including other petitions services, or even their own websites or blogs. All of those signatures, once validated, will count towards a petition’s objective of meeting the 100,000-signature threshold needed for an official White House response.”

According to Heyman, more than 16 million users have created and/or signed more than 360,000 petitions.
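If you want a feel for what this looks like in practice, here is a minimal sketch of signing a petition from a third-party application. The endpoint path, field names and placeholder values are my assumptions rather than the official documentation, and you will need an approved API key first:

```python
import requests

# Hypothetical sketch of submitting a We the People signature from a
# third-party app. Endpoint and field names are assumptions; the key is
# issued after your Petitions API application is approved.
API_KEY = "YOUR_API_KEY"
BASE = "https://api.whitehouse.gov/v1"

signature = {
    "petition_id": "EXAMPLE_PETITION_ID",  # placeholder petition ID
    "email": "jane.doe@example.com",
    "first_name": "Jane",
    "last_name": "Doe",
}

resp = requests.post(f"{BASE}/signatures", json=signature,
                     params={"api_key": API_KEY})
resp.raise_for_status()
# Signatures are validated asynchronously before counting toward the goal.
print(resp.json())
```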

Apply for access to the Petitions API.

Help get USDA to lead with APIs when it comes to America’s parks

Photo: USDA


I need your help with something.

I’m in the business of helping everyone from startups to our federal government identify valuable assets and make them more accessible for use in websites and mobile applications. As part of this work, I’m always on the lookout for valuable public assets across city, state and federal government, and I help make sure the conversations around these assets include application programming interfaces, so that we aren’t building web and mobile applications in silos and limiting public access for individuals and small businesses.

Over the last couple years, I have cultivated a small group of API evangelists in various sectors who help keep me informed of important government projects. One of my partners in crime, Alyssa Ravasio (@alyraz), brought to my attention a pre-solicitation for a request for proposal from the U.S. Department of Agriculture for “Recreation One Stop Support Services.” You can see this service in action at Recreation.gov. In short, this is an ongoing contract our federal government maintains with a private sector contractor to provide access to our nation’s campgrounds and campsites at our federal parks.

I would put our national parks and campgrounds at the top of the treasured assets this country possesses, and we need to make sure they continue to be as accessible to individuals and small businesses as possible. I want to take the time to respond to the pre-solicitation from USDA, in hopes of helping them craft a better RFP before this goes out to bid.

I also want to encourage others who are passionate about camping and our national parks to submit their own feedback on USDA’s solicitation number AG3187S151000–Recreation One Stop Support Services.

The current incarnation of the one stop service for recreation project has an API. Well, it has web services as part of the existing Recreation Information Database (RIDB), but you can also find mentions of APIs in the proposed RFP, under 5.3.9 Mapping and Geospatial Capability:

A segment of the user community will not only be accessing and viewing the mapping data through the web interface at www.recreation.gov, but will also have a need to search, retrieve, and download the geospatial data via the RIDB data sharing interface(s), or APIs. The Contractor shall demonstrate utilization of programming languages, formats, data types/conventions/sources, etc. which maximize the portability and usability of maps and map related data by third-parties. This minimizes the need for third-party data consumers to procure expensive and/or proprietary software to utilize, edit or manipulate the mapping data. The contractor shall ensure that all available geospatial data within R1S is incorporated into the user interface to provide users with the most complete mapping solutions available.
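To make that requirement concrete, here is a rough sketch of the kind of simple, third-party-friendly geospatial access the RFP should mandate. The host, path, parameter and field names below are assumptions modeled loosely on the existing RIDB, not a documented interface:

```python
import requests

# Hypothetical campground search against an RIDB-style API. Host, path,
# parameters and response fields are assumptions for illustration only.
BASE = "https://ridb.recreation.gov/api/v1"

params = {
    "query": "campground",
    "latitude": 44.4280,     # example: Yellowstone-area search
    "longitude": -110.5885,
    "radius": 50,            # miles
    "limit": 10,
}
resp = requests.get(f"{BASE}/facilities", params=params,
                    headers={"apikey": "YOUR_RIDB_KEY"})
resp.raise_for_status()

for facility in resp.json().get("RECDATA", []):
    print(facility["FacilityName"],
          facility["FacilityLatitude"], facility["FacilityLongitude"])
```

This is exactly the kind of interface that lets a third party put campsites on a map without procuring expensive or proprietary GIS software.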

There is another mention of APIs under “5.3.6 Provide Travel Planning Tools” with:

Non Federal reservable inventory may include state and local reservable and non-reservable recreation inventory which may be incorporated into the system or accessed via various application programming interfaces (APIs).

Then, towards the end you see the following needs:

  • Data Sharing and Data Management – Information originating from within RIDB and imported shall be presented in a manner such that the external origin of the data is transparent to users and consistent with presentation of information originating from within.
  • Information Sharing – The Contractor shall deliver automated and manual services for the sharing of the consolidated recreation information described herein. The sharing service shall utilize prevailing industry best practices and technologies to make the consolidated recreation information publicly available to data consumers.
  • Information Consolidation – The Contractor shall deliver automated and manual services for collecting of a wide variety of recreation information from a wide range of sources including Federal Government Agencies and Federal Partner organizations such as Historic Hotels of America.

To me, all of this implies that whoever wins the contract will need to at least maintain what is available via the RIDB, which is nice, but it is just a web service, not a modern web API, and it only covers providing access to a small portion of the valuable resources that will need to be made available via Recreation.gov to partners and the public at large. We can do better.

An API needs to be a first-class citizen in this RFP, not just an afterthought or a database connection that needs to be maintained. An API will be central to delivering the resources needed for a “one stop service for recreation.”

Camping resources

When you look through the requirements for this project, you see all the building blocks for allowing individuals to engage with campsites across our nation: inventory, ticketing, lotteries, permitting, day use and group facilities, special events and use, equipment rentals, and recreational and pass sales. All of this keeps our national parks accessible to campers.

Mapping and geospatial resources

Camping resources are meaningful to us because of where they are located. Mapping and geospatial resources will be vital to helping users discover the parks and campsites where they wish to stay on their camping trips. The ability to search and navigate campsites via modern mapping and geospatial resources is central to the one stop service for recreation.

Financial resources

A third cornerstone of the recreation one-stop support services platform is financial resources. The ability to accept and process credit cards, cancellations, modifications and refunds will play a role in every engagement on the platform. Access to all aspects of commerce via the platform will make or break operations at every level of the one stop service for recreation.

User resources

The fourth corner of the one stop service for recreation platform is the user. Providing secure, seamless access to user profiles and history throughout the Recreation.gov experience will be critical to it all working. User resources are central to any modern web or mobile application in 2014, and they depend on camping, geo and financial resources to bring it all together.

Recreation.gov website

All of these resources need to be made available at Recreation.gov in the form of a fully functioning web application, with a content management system (CMS) for managing the site, complete with a reservation system that allows users to discover camping resources via mapping and geospatial resources and pay for their reservations via financial resources, all through a secure, easy-to-access user profile.

Travel planning tools

Beyond the core public website and its reservation system, the requirements call for travel planning tools including point-to-point itinerary planning, field sales and reservation sales, third-party sales, and reservable inventory management. While these tools are likely to be primarily web-based, the requirements call for remote sales and management of inventory, which requires a central API solution.

Robust reporting solutions

Reporting is prominent throughout the requirements, which call for web analytics, canned reports, customizable reports, monthly performance reporting, and enterprise reporting systems. In short, reporting will be essential to all aspects of the one stop service for recreation, for the contract winner and operator as well as all of the agencies involved. I would add that end users will also need some views into their own Recreation.gov usage, adding another layer to the conversation.

Data accessibility and portability

Another element present throughout the requirements is data access and portability for the camping, reservation, customer (user) and resulting financial data. Pretty much everything in the system should be accessible, right? This goes beyond what is available via the RIDB, requiring all aspects of the one-stop service for recreation to be accessible for use in any other system, or as a simple download.

Minimum required applications and platforms

As part of the requirements, there is a minimum set of browsers and mobile platforms that the one stop service for recreation should operate on:

  • Applications:
    • Microsoft Internet Explorer
    • Mozilla Firefox
    • Google Chrome
    • Apple Safari
  • Platforms:
    • Android Mobile Devices
    • Apple iOS Mobile Devices
    • Apple Desktop OS
    • Microsoft Windows Mobile Devices
    • Microsoft Windows PC

For me, this portion of the requirements is extremely vague. I agree that Recreation.gov and the travel planning tools should work in all these browsers and be responsive on mobile browsing platforms, but these requirements seem to allude to more without actually going there. In 2014-2016, you can’t tell me that native mobile experiences won’t be something end users of Recreation.gov and the overall system will be demanding. Mobile will be the gateway to the platform’s mapping and geospatial resources, and APIs are how you deliver the resources needed for developing mobile applications on all of the platforms listed.

A one-stop service for recreation requires a multi-channel approach

The one-stop service for recreation requires a multi-channel approach to deliver the camping, mapping and geo, financial and user resources to the main Recreation.gov website, the supporting reservation system, the travel planning tools, the reporting solutions and potentially any other web or mobile applications, rounded off with the data accessibility and portability demanded of a modern federal government platform.

Transform the RIDB web service into a modern API and make it the center

The only way to engage with Recreation.gov users in the web and mobile locations they will demand, and to deliver the reservation, travel planning and reporting solutions while making sure everything stays portable, is to bring the RIDB database and its supporting web services up to date and transform them into a modern web API. The RFP_-_Attachment_10_-_Notional_Future_RIDB_Data_Flow_-_Sep._22_2014.pptx even reflects this vision, but the actual RFP falls severely short of describing the role an API will play in the platform.

Making security a priority across all aspects of operations

With a single API layer acting as the access layer for all web, mobile, reporting and data accessibility channels, security takes on a whole new meaning. APIs provide a single access point for all platform resources, allowing security and auditing to occur in real time using modern approaches to API management. APIs use existing web technology and don’t require any special technology, but they do provide a single layer to enable, monitor and deny access to all resources across the one-stop service for recreation.
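As a simple illustration of the pattern, here is a minimal sketch of that single access layer using Flask. The key store and audit sink are stand-ins for a real API management system, and the resource is hypothetical:

```python
from datetime import datetime, timezone
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
VALID_KEYS = {"demo-key-123"}  # stand-in for a real registration system

@app.before_request
def authenticate_and_audit():
    # Every request, for every resource, passes through one key check
    # and one audit log entry, enabling real-time security monitoring.
    key = request.headers.get("X-Api-Key")
    if key not in VALID_KEYS:
        abort(401)
    print(datetime.now(timezone.utc).isoformat(),
          key, request.method, request.path)

@app.route("/campsites")
def campsites():
    # A hypothetical platform resource behind the shared access layer.
    return jsonify([{"id": 1, "name": "Example Campground"}])

if __name__ == "__main__":
    app.run()
```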

A centralized metrics, analytics and reporting layer

Married with security, an API-centric approach allows for a platform-wide analytics layer that can drive security while also providing the data points required for the overall platform reporting system. With an API analytics layer, this data can be delivered to platform, government, developer and end-user reporting solutions. A comprehensive metrics, analytics and reporting strategy is essential to a modern one stop service for recreation platform.

APIs let in the sunlight, enforcing transparency and accountability

One critical aspect of a modern, API-centric approach is the transparency and accountability it provides. All database connections and web or mobile applications are brokered via an API layer, requiring all applications to rely on the same pipes for reading, writing, modifying and removing data from the system. This approach provides a secure window into all aspects of platform operations, meeting the accountability requirements set forth in the proposed RFP.

Some constructive criticism for language in the proposed RFP

There are some points in the proposed RFP that, in my opinion, move the conversation backwards rather than looking to the future and fulfilling a vision of what the “one stop services for recreation platform” could be.

One example is “5.3.6.3 Third Party Sales”:

The Government anticipates that during the potential 10-year period of performance after go-live there may be an interest by commercial travel and recreation planning companies (such as Travelocity, Expedia, etc.) to utilize R1S inventory and availability data to facilitate R1S transactions. The Government may be open to such Third Party Sales arrangements, provided the business, financial and legal terms of the arrangement are in the Government’s best interests. At the Government’s request, the Contractor shall lead exploratory meetings, including Government and applicable industry representatives, to determine commercial interest in such a service and also explore the technical and fiscal feasibility of such Third Party Sales arrangements. Upon the conclusion of all necessary exploratory meetings, and should the Government decide to implement Third Party Sales, the Government will issue a formal modification to the R1S Support Services contract to incorporate the service. There is no requirement for the contractor to provide Third Party Sales services until a post-award modification is issued requiring them.

“There may be an interest by commercial travel and recreation planning companies (such as Travelocity, Expedia, etc.)?” In the current online environment, I am pretty sure this shouldn’t be “may”; it should be “will,” and the pace of business today moves quite a bit faster than this section describes. APIs enable a new approach to business development, one coined ten years ago by the co-founder of Flickr, the popular photo-sharing platform.

I understand that you will have to certify developers, but this should not get in the way of small business developers experimenting and innovating with the resources available via the platform. Business development via an API has been going on in the private sector for ten years and isn’t something USDA should put off until the next procurement cycle for the platform.

The proposed RFP states:

The current contract is a Variable Quantity contract with Fixed Price per-transaction pricing such that the Contractor earns a fixed fee for each transaction processed via the www.recreation.gov website, the telephone customer support line, or in-person at participating Federal recreation locations.  The fixed fees under the current contract vary depending on the type of inventory being reserved, the sales channel, etc.

In an era where we have over 10,000 API resources to use in applications, where cloud-based utility pricing is successfully being applied, and where you pay for what you use across numerous industries, you can’t have a business model like this for a public-private partnership and not put APIs to use, enabling trusted partners to build business models around the commerce opportunities available via the platform. Right now, this is exclusive to the current contract holder, Active Network, d.b.a. ReserveAmerica. For the next round, it should not be exclusive to the contract winner; it should be open to the thousands of businesses that serve the parks and recreation space.

Another item I have an issue with is ownership over the source code of the platform:

The Government owns all of the inventory, transaction and customer data associated with Recreation.Gov; however the reservation system remains the sole proprietary property of the current service provider.

I know this is the current situation, but I strongly urge that for the next relationship this not be the case. The government should own all of the inventory, transaction and customer data, and the reservation system should be open source. Period. The chances of innovation via the platform if it remains a proprietary solution are slim, and requiring that the core reservation system and supporting API platform be open source will stimulate innovation around Recreation.gov. Any business, including the winner of the contract, can still build proprietary solutions that use the API to deliver specific applications, end-user solutions or premium features, but the core software should be open this time.

Some praise for what is in the proposed RFP

I am not one to just criticize without providing at least some praise, acknowledging where USDA is looking toward the future. One area I have to note is the inclusion of the U.S. Digital Services Playbook, where #3 on the checklist for deploying in a flexible hosting environment is “resources are provisioned through an API,” and #2 on the checklist for defaulting to open says that you should provide data sets to the public, in their entirety, through bulk downloads and APIs (application programming interfaces). I think the U.S. Digital Services Playbook should be included with ALL federal government RFPs, and it makes me happy to see it here.

I also understand that the agencies involved in the project include the USDA Forest Service, the U.S. Army Corps of Engineers, the U.S. Department of the Interior (including at least the National Park Service, Bureau of Land Management, Bureau of Reclamation, and Fish and Wildlife Service), the National Archives and Records Administration, and other potential federal partners to be determined at a later date.

Agencies involved in the trip planning component include the agencies named above, as well as the Department of Transportation’s Federal Highway Administration, the Department of Commerce’s National Oceanic and Atmospheric Administration, the Tennessee Valley Authority and the Smithsonian Institution. This is a huge undertaking, and I commend them on the effort, but I can’t help throwing in that this is all the more reason the one-stop service for recreation has to have an API. ;-)

In closing

I think I’m finished with my response to the Department of Agriculture’s solicitation number AG3187S151000–Recreation One Stop Support Services. I would like to request that they extend the deadline for responses another 30 days, to allow me to drum up more feedback, as well as put more thought into the RFP myself. Following USDA’s own lead in including the U.S. Digital Services Playbook, I would also recommend bringing in:

  • 18F – Building on the U.S. Digital Services Playbook, 18F has an amazing set of resources to help USDA move forward with this RFP in a way that is in line with other leading platform efforts in the federal government—please make sure 18F is involved.
  • Round 3 PIFs – This is probably already in the works, but make sure someone from the recent round of Presidential Innovation Fellows is on board, helping make sure this project is headed in the right direction.

Ultimately, I think all the right parts and pieces are present for this project, but when it comes to finding the right language and vision for the RFP, it needs a lot of help, and I’m confident that 18F and the PIFs can help bring it home. The most important element for me is that the web service from the RIDB needs to be transformed into a modern web API and brought front and center to fuel every aspect of the platform. I’m confident that if the RFP speaks the right language, the winning contractor will be able to translate it into the platform it needs to be, serving the expectations of the modern consumers who will be using it in the coming years.

If this RFP goes out the door without an API vision, planning of the family camping trip will be done in a silo, at Recreation.gov, and not on the mobile phones that are ubiquitous in our everyday lives. Users do not need a web experience that is translated to function in mobile browsers; they need a web experience when it makes sense, a native mobile experience, and even offline access when necessary.

Planning the family vacation will not happen from just Recreation.gov. It needs to be part of a larger vacation planning experience, occurring via the online platforms we already use in our daily lives. If USDA focuses on an API-first vision for delivering the one stop service for recreation, it will live up to its name—truly being a one-stop service that can be used for planning your recreation from any platform or device.

Call to action

Visit USDA’s solicitation number AG3187S151000–Recreation One Stop Support Services and ask them to extend the comment period, and let them know how important it is that the platform be API-centric. Feel free to send me a link to your feedback, and I’ll add a list of relevant stories to the bottom of this post as I find them.

GovFresh guide to openFDA

Photo: FDA / Michael J. Ermarth


The U.S. Food and Drug Administration’s openFDA initiative aims to “make it easier for web developers, researchers, and the public to access large, important public health datasets.”

Initial areas of focus include adverse event and recall enforcement reports and product labeling data (see “Drugs,” “Devices,” “Foods”). openFDA currently provides an application programming interface to information that was previously unavailable or not easily accessible to the public, including millions of reports submitted to the FDA from 2004 to 2013.
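Getting started is straightforward. This short query, based on the public openFDA documentation, tallies the most frequently reported drug reactions in the 2004-2013 reports (no API key is needed for light use, though rate limits apply):

```python
import requests

# Count the most common patient reactions in drug adverse event reports
# received between 2004 and 2013, via the openFDA drug event endpoint.
resp = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={
        "search": "receivedate:[20040101 TO 20131231]",
        "count": "patient.reaction.reactionmeddrapt.exact",
    },
)
resp.raise_for_status()

for row in resp.json()["results"][:10]:
    print(row["term"], row["count"])
```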

History

The U.S. Department of Health and Human Services identified “Open FDA” as a priority in its open government “Plan Version 3.0.” Led by the FDA’s Office of Informatics and Technology Innovation, the project began in March 2013 and launched a public beta in June 2014.

Timeline

  • March 2013: Taha Kass-Hout named FDA chief health informatics officer.
  • March 2013: Work on openFDA begins.
  • July 2013: White House Presidential Innovation Fellows join to support openFDA development.
  • December 2013: FDA creates the Office of Informatics and Technology Innovation, led by Kass-Hout.
  • June 2014: openFDA launches in beta.

Quotable

“The openFDA initiative leverages new technologies and methods to unlock the tremendous public data and resources available from the FDA in a user-friendly way. OpenFDA is a valuable resource that will help those in the private and public sectors use FDA public data to spur innovation, advance academic research, educate the public, and protect public health.”

– FDA Chief Operating Officer and Acting Chief Information Officer Walter S. Harris

“In the past, these vast datasets could be difficult for industry to access and to use. Pharmaceutical companies, for example, send hundreds of Freedom of Information Act (FOIA) requests to FDA every year because that has been one of the ways they could get this data. Other methods called for downloading large amounts of files encoded in a variety of formats or not fully documented, or using a website to point-and-click and browse through a database – all slow and labor-intensive processes.”

– FDA Chief Health Informatics Officer Taha Kass-Hout


Models for API driven startups built around public data

I had a conversation with a venture capitalist recently who was looking for information on startups that have APIs and have built their companies around public data.

The two companies referenced in the original contact email were Eligible and Clever: two similar, yet very different, approaches to aggregating public data into a viable startup (Clever used to aggregate data from school districts, but now just provides login services).

During our conversation, we talked about the world of APIs and open data, both in government and across the private sector. I spent 30 minutes helping them understand the landscape, and told them that when I was done I would generate a list of APIs I thought were interesting, categorized into a similar space as Eligible and Clever, something that proved much more difficult to quantify than I expected. Nonetheless, I learned a lot and, as I do with all my research, I wanted to share the story of my experience.

I started with the companies that, off the top of my head, had built interesting businesses around publicly available facts and data, a definition that would expand as I continued.

First up were a couple of APIs that I know provide some common data sources:

Next, I wanted to look at a couple of the business data APIs I depend on daily, and while I was searching I found a third. What I think is interesting about these business data providers is their very different business models and approaches to gathering and making the data available.

Immediately after looking through Crunchbase and OpenCorporates, I queried my brain for other leading APIs that pull content or data from public sources and develop a business around them. It makes sense to look at the social data and content realm, but this is where I stop. I don’t want to venture too far down the social media rabbit hole, but I think these two providers are important to consider.

While Twitter data isn’t the usual business, industry or product facts, it has grown to be the new data that is defining not just Twitter’s business, but a whole ecosystem of aggregators and other services built on consuming, aggregating and often republishing public raw or enriched social data.

I also wanted to step back and look at Clever again, and think about their pivot from aggregating school data to being a login service. There was another API I was tracking that offered a similar service to Clever, aggregating school data, that I think is important to list alongside Clever.

As far as I know, both Clever and LearnSprout are adjusting to find the sweet spot in what they do, but I keep them on the list because of what they did when I was originally introduced to their services. I think we can safely say there will be no shortage of startups to come, following in Clever and LearnSprout’s footsteps, unaware of their predecessors and of the challenge they face when aggregating data across school districts.

Healthcare data

After taking another look at Clever, I also took another walk through the Eligible API, and spent time looking for similar data-driven APIs in the healthcare space. I think Eligible is a shining example of what this particular VC was looking for, and a good model for startups looking not just to build a company and an API around public data, but to do it in a way that makes a significant impact on an industry.

I know there are more healthcare data platforms out there, but these are a handful that have APIs I track. Healthcare is one of those heavily regulated industries where there is a huge opportunity to aggregate data from multiple public and private sector sources and build an API-driven business on top.

Energy data

After healthcare, my mind immediately moved to the world of energy data, because there is a task on my list to study open data licensing as part of a conversation I’m having with Genability. I think what this latest wave of energy API providers is doing with the data of individual customers, as well as with wider power company, state and federal data, is very interesting.

When I was in Maryland this past May, moderating a panel with folks from the Department of Energy, a conversation came up around the value of Department of Energy data to the private sector. I’d put the Department of Energy in the top five agencies when it comes to data with viability for use in the private sector and the potential to make a significant economic impact.

Libraries

Pushing the boundaries of this definition again, I stumbled onto the concept of launching APIs for libraries, built around public or private collections. While not an exact match to the other APIs in this story, I think what the Digital Public Library of America (DPLA) is doing reflects what we are talking about: building a platform around public and private datasets (collections, in this case).

Just like government agencies, public and private institutions possess an amazing amount of data, content and media that is not readily available online, which presents a pretty significant opportunity to build API-driven startups and organizations around these collections.

Scientific data

There is a wealth of valuable scientific data being made available via public APIs from various organizations and institutions. I’m not sure where all of these groups gather their data, but I’m sure a lot of public funding and public sources are behind some of the APIs I track.

These are just two of the numerous scientific data APIs I keep an eye on, and I agree this is a little outside the bounds of exactly what we are looking for. However, I think the opportunity for designing, deploying and managing high-performing, high-value APIs from publicly and privately owned scientific data is pretty huge.

Government data

As I look at these energy and scientific APIs across my monitoring system, I’m presented with other government APIs that are consumer focused and often have the look and feel of a private sector startup, while also having a significant impact on private sector industries.

While all of these APIs are clearly .gov initiatives, they provide clear value to consumers, and I think there is opportunity for startups to offer complementary, or even competing, services around this government-generated, industry-focused open data, going further than these platforms do.

Quasi-government data

Alongside those very consumer- and industry-oriented government efforts, I can’t help but look at the quasi-governmental APIs I’m familiar with that provide data-driven APIs similar to the government ones above.

While these may not be models for startups, I think they provide potential ideas that private sector nonprofit groups can take action on. Providing mortgage, energy, environmental or even healthcare services developed around public and private sector data will continue to grow as a viable business model for startups and organizations in the coming years.

Watchers of government data

Adding another layer to government data, I have to include the organizations that keep an eye on government: a segment of organizations that has evolved around building operational models for aggregating, generating meaning from, and then republishing data that helps us understand how government is working (or not working).

These are all nonprofit organizations doing this work, but when it comes to journalism, politics and other areas, there are viable services that can be offered around, and on top of, the valuable open data being liberated, generated and published by the watchers of our government.

School data again

One interesting model for building a business around government data is GreatSchools. There are high-value datasets available from the Department of Education as well as the Census Bureau, and using these sources has become a common model for building a company around public data.

I’m not exactly a fan of GreatSchools, but I think it is worth noting. I’ve talked with them, and they don’t have as open a business model and platform as I would like to see. I feel it is important to “pay it forward” when building a for-profit company around public data. I don’t have a problem with building businesses and generating revenue around public data, but if you don’t contribute to making it more accessible than you found it, I have a problem.

News

After spending time looking through the APIs I monitor, I remembered the use of public data by leading news sources. These papers are using data from federal, state and city sources and serving it via APIs, right alongside the news.

These news sources don’t make money off the APIs themselves; like software-as-a-service providers, they use the APIs to add value to their core business. Census surveys, congressional voting, economic numbers and other public data are extremely relevant to the everyday news that impacts us.

Been doing this for a while

When we talk about building businesses around publicly available data, there are some companies who have been doing it for a while. The concept really isn’t that new, so I think it is important to bring these legacy providers into the conversation.

Most of these data providers have been at it for over a decade. They all get the API game and offer a wide range of API services for developers, providing access to data taken directly from, derived from, or enhanced with public sources. When it comes to building a business around public data, I don’t think these four have the exact model I’m looking for, but they offer many lessons in how to do it right, and wrong.

Weather is an age-old model

When you think about it, weather is one of the original areas where we built services around government data. Weather data is a common reference you will hear when any official talks about the potential of government data. There are numerous weather API services doing very well when it comes to digesting public data and making it relevant to developers.

Weather is the most relevant API resource I know of in the space. Weather impacts everyone, making it a resource all web and mobile applications will need. With growing concern around climate change, this model of using public data to generate valuable APIs will only become more important.

Time zone data

Right behind weather, I would say time and date information is something that impacts everyone. Time shapes our world, and government sets the tone of the conversation when it comes to date and time data, something that is driving many API-driven business models.

What I like about time and date APIs is that they provide an essential ingredient in all application development. They are an example of how government can generate and guide data sources while allowing the private sector to help manage the vital services around this data that developers depend on for building apps.

Currency conversion

Along with time zone data, currency conversion is a simple, valuable, API-driven service needed across our economy. You have to know what time it is in different time zones, and what the conversion rate is between different currencies, to do business in the global API economy.

In our increasingly global, online world, currency conversion is only going to grow more important. Workforces will be spread across the globe, and paying employees and buying goods and services will increasingly span borders, requiring seamless currency conversion in all applications.

Transit data

Another important area of APIs increasingly impacting our everyday lives is transit APIs, which provide real-time bus, train, subway and other public transit data to developers.

Transit data will always be a tug of war between the public and private sectors. Some data will be generated in each sphere, with some projects incentivized by the government where the private sector is unwilling to go. Establishing clear models for public- and private-sector partnerships around transit data will be critical to a functioning society.

Real estate

While I won’t be covering every example of building a business around public data in this story, I would be remiss if I didn’t talk about the real estate industry, one of the oldest businesses built on public data and facts.

I’m not a big fan of the real estate industry. One of my past startups was built around aggregating MLS data, and I can safely say that the real estate industry is one of the shadiest industries I know of that is built on top of public data. I don’t think this industry is a model we should follow, but again, I do think there are huge lessons to be learned from the space as we move forward building business models around public data.

That is as far as I’m going to go in exploring API-driven businesses built on public data. My goal wasn’t to be comprehensive; I was just looking to answer some questions for myself about who else is playing in the space.

This list of businesses came out of my API monitoring system, so it is somewhat limited in focus, requiring that a company building on top of public data also have an API, which creates quite a blind spot for this research. However, this is a blind spot I’m willing to live with, because I think my view represents the good in the space, and where we should be headed.

Open Data 500

For the next edition of this story, I’d like to look through the 500 companies listed in the Open Data 500 project. I like the focus of this project from GovLab at New York University.

Their description from the site sums up the project:

The Open Data 500 is the first comprehensive study of U.S. companies that use open government data to generate new business and develop new products and services. Open Data is free, public data that can be used to launch commercial and nonprofit ventures, do research, make data-driven decisions, and solve complex problems.

I see a few of the companies I’ve listed above in the Open Data 500. I’m stoked that they provide both JSON and CSV versions of the Open Data 500, making it much easier to process and make sense of the companies listed. I’d like to make a JavaScript slideshow from the JSON file and browse through the list of companies, adding my own set of tags to help me better separate the good examples from the bad, as well as see where the trends and opportunities are in developing APIs around public data.
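As a first pass at that tagging exercise, here is a small sketch that pulls the JSON version and tallies companies by category, a starting point before hand-tagging. The download URL is an assumption; grab the actual JSON link from the Open Data 500 site:

```python
import json
from collections import Counter
from urllib.request import urlopen

# Assumed download location for the Open Data 500 company list.
URL = "http://www.opendata500.com/us/download/us_companies.json"

with urlopen(URL) as f:
    companies = json.load(f)

# Tally companies by category as a starting point for custom tags.
categories = Counter(c.get("category", "unknown") for c in companies)
for category, count in categories.most_common():
    print(f"{count:4d}  {category}")
```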

I’m pretty convinced that we have a lot of work to do in making government machine-readable data at the federal, state, county and city level more available before we can fully realize the potential of the API economy.

Without high-quality, real-time, valuable public data, we won’t be able to satisfy the needs of the next wave of web, single-page, mobile and Internet of Things application developers. I’m also hoping we can establish some healthy blueprints for developing private sector businesses and organizations around public data by reviewing some of the existing startups finding success with this model, and build on or complement this existing work rather than reinvent the wheel.

Oakland vendor API requirement a big step for municipal open government

Oakland (Photo: Luke Fretwell)


To get an idea of how badly Oakland needs to upgrade its digital infrastructure, you just need to read this one line from Tuesday’s city council staff report:

“Legistar 4.8 has not been upgraded since purchase in 1997 & has reached the limits”

The limits in this case being the massive limitations of the current technology in supporting better civic engagement and discussion, and the lack of any way for our community to access the critical data held in Oakland’s legislative system.

There are many big changes desperately needed in Oakland’s civic tech stack, and this one is long overdue. Our ancient legislation software was the reason Miguel Vargas and his crew struggled so hard to complete the build-out of our Councilmatic system. With this upgrade, we’ll now use a system similar to other major cities’, which means both improved, user-facing functionality and a much easier deployment of a more robust Councilmatic that has been tailored for this version by folks in Philadelphia and Chicago.

We’ve been waiting for over two years, so it’s exciting that this finally got approved by the Finance Committee. While the software upgrade itself is an important step for our city, more important was witnessing the ways our staff and elected officials have adapted their thinking about technology, data, code and procurement.

Two years ago there was nothing to brag about, not much to be proud of in Oakland’s use of technology and our lawmaking. Today, we saw a pivotal moment for our city.

It turns out there is something the vendor, Granicus, can offer in addition to the basic software: an application programming interface. If you’re not a tech geek, this essentially means a robot (code, not a real one) that takes in requests from various people, programs and companies, and dishes out the requested information in digital form.

In this case, the API is something Granicus has built but has not made available to cities that have not required access to it, which is almost no one to date (New York City is just now struggling to get this sorted out and seems to be on the right track).

Before approving the purchase, Councilmember Libby Schaaf asked the committee to require that Granicus provide an API as part of the contract requirements. No one in Oakland has ever unbundled the contracted software from the data before (aside from the unintentional effort with SeeClickFix, which came with an API we didn’t need to request).

This means Oakland gets a new legislative publishing and video streaming system, but we also get direct access to all the data in this system: machine-readable data that allows local hackers and engineers to build alert systems on specific issues and neighborhoods, custom tools to help people stay informed about what our government is doing and, well, anything you may want to do with full access to the data about our decision-making and public meeting track records.
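For a sense of what that unlocks, here is a sketch of a tiny alert script against the vendor’s legislative API. The host and query style are modeled on Granicus’s public Legistar web API, but the “oakland” client name, the filter and the field names are my assumptions:

```python
import requests

# Hypothetical poll of a Legistar-style API for matters mentioning a
# keyword; client name, filter and field names are assumptions.
BASE = "https://webapi.legistar.com/v1/oakland"

resp = requests.get(f"{BASE}/matters", params={
    "$filter": "year(MatterIntroDate) eq 2014",  # OData-style filter
    "$top": 25,
})
resp.raise_for_status()

for matter in resp.json():
    title = matter.get("MatterTitle") or ""
    if "housing" in title.lower():
        # Feed matches into an email or SMS alert system.
        print(matter.get("MatterFile"), title)
```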

After the meeting, I emailed LaTonda Simmons, our city clerk and the manager of this whole system, to thank her for moving this forward and making it possible to unlock this data. I was concerned the lack of specificity about the API being public would somehow bite us in the ass. I was wrong.

Her response was encouraging. Folks in city hall are listening, and it turns out geeks can make a difference:

Hi Spike – I spoke to Granicus immediately after to Finance.  They reconfirmed they will turn on API. And yes, your feedback and that of many others will be important in this process.  More to come and thank you for your support also.  I must add that this wouldn’t have been possible without Bryan making it move.  Looking forward to the next CityCamp event.  Chat soon.

-= LaTonda S.

People in the city are really starting to get this stuff, and it’s going to be awesome as it becomes the norm: less bundling of contracted software, and more built-in, accessible open data.

Education Department wants your ideas on open data, APIs

Photo: U.S. Department of Education


The U.S. Department of Education has published a request for information asking for public feedback on how the agency can innovate with open data, particularly application programming interfaces.

“The Department wants to make sure to do this right,” writes Education Department Senior Policy Advisor David Soo on the agency’s blog announcing the RFI. “It must ensure the security and privacy of the data it collects or maintains, especially when the information of students and families is involved. Openness only works if privacy and security issues are fully considered and addressed. We encourage the field to provide comments that identify concerns and offer suggestions on ways to ensure privacy, safeguard student information, and maintain access to federal resources at no cost to the student.”

From the RFI:

This RFI seeks to explore potential ways in which the Department can expand on its successful efforts to increase and enhance access to information on higher education already published on the Department’s Web site, including through IPEDS, EDFacts, and other National Center for Education Statistics surveys related to higher education; data held by the Department’s Office of Federal Student Aid (FSA); and data available elsewhere in the Department that focuses on higher education. The Department also seeks feedback on options for read-only and read-write APIs that could increase access to and use of benefits, forms, and processes offered for programs authorized under title IV of the Higher Education Act of 1965, as amended (HEA), including in the submission of the Free Application for Federal Student Aid (FAFSA); enrollment in Income-Driven Repayment (IDR) programs; enrollment in the Public Service Loan Forgiveness program; participation in processes offered by title IV aid servicers in repaying Federal student loans; and use of loan counseling and financial literacy and awareness tools.

See the full RFI and submit feedback to APIRFI@ed.gov by June 2.

An API strategy for the U.S. government

U.S. Chief Technology Officer Todd Park with President Obama (Photo: White House)


I was asked to provide some thoughts on what is next for the U.S. government’s application programming interface strategy. I’ve put a lot of thought into it during my work and travels over the couple of months since I left Washington, D.C., and I keep coming back to one thought: strengthen what we have.

I wish I had some new technology or platform for the next wave of government APIs that would ensure success in Washington, but in reality we need to keep doing what we’ve been doing, do it at scale, and get organized and collaborative about how we do it.

Release more data sets

There are thousands of data sets currently available via Data.gov, across 176 agencies and numerous categories. We need more. When any content or data is published via a government website, it needs to also be made available via the agency’s data repository and Data.gov. Agencies need to understand that releasing open data sets is not something you do every once in a while to meet a mandate or deadline; it is something you do always, forever.

Refine existing data sets

There is a lot of data available currently. However, much of it is in varying formats and inconsistent data models, and it isn’t always immediately usable in spreadsheets, applications and analysis. There is a great deal of work to be done cleaning, normalizing and refining existing data, as well as deploying APIs around open data, which would increase adoption and the chances it will be put to use.

Refine existing APIs

Like open data, there are many existing APIs across the federal government, and these APIs could use a lot of work to make them more usable by developers. With a little elbow grease, existing APIs could be standardized by generating common API definitions like Swagger, API Blueprint and RAML, which would help quantify all APIs, generate interactive documentation and code samples, and provide valuable discovery tools for understanding where interfaces are and what they offer. The mission up until now has been for agencies to deploy APIs, and while this remains true, the need to evolve and refine APIs will go a long way toward building the valuable case studies needed to get to the next level.

Robust /developer areas

There are over 50 agencies who have successfully launched a /developer area to support open data and API efforts. Much like the open data and the APIs themselves, they represent a mishmash of approaches and provide varying amounts of resources and necessary support materials. HowTo.gov already provides some great information on how to evolve an agency’s developer area; we just need some serious attention spent on helping each agency make it so. It doesn’t matter how valuable the open data or APIs are: if they are published without proper documentation, support and communication resources, they won’t be successful. Robust developer areas are essential to federal agencies finding success in their API initiatives.

Dedicated evangelist

Every successful API initiative in the private sector, from Amazon to Twilio, has employed evangelists to spread the word and engage developers, helping them find success in putting API resources to use. Each federal agency needs its own evangelist to work with internal stakeholders, making sure open data is published regularly, new APIs are deployed and existing resources are kept operational and up to date. Evangelists should have counterparts at the OSTP, OMB and GSA levels providing feedback and guidance, as well as regular engagement with evangelists in the private sector. Evangelism is the glue that will hold things together across an agency, as well as provide the critical outreach to the private sector that increases adoption of government open data and APIs.

Build public-private sector partnerships

Opening up data and APIs in the federal government is about sharing the load with the private sector and the public at large. Open data and APIs represent the resources the private sector will need to build applications and sites, and to fuel industry growth and job creation. A new type of public-private partnership needs to be defined, allowing companies and nonprofit groups to access and use government services and resources in a self-service, scalable way. Companies should be able to build businesses around government Internet services, much like the ecosystem that has grown up around the IRS e-File system, with applications like TurboTax that reach millions of U.S. citizens and allow corporations to help government share the load while generating necessary revenue.

Establish meaningful case studies

When it comes to open data and APIs, nothing gets people on board more than solid examples of the impact that open data, APIs and the applications built on them have made in government. Open government proponents use weather data and GPS as solid examples of open data and technology impacting not just government, but also the private sector. We need to fold the IRS e-File ecosystem into this lineage, and also work toward establishing numerous other case studies we can showcase and tell stories about, showing why open data and APIs are important in ways that truly matter to everyone, not just tech folks.

Educate and tell stories within government

In order to take open data and APIs to the next level in government, there needs to be an organized and massive effort to educate people within government about the opportunities around open data and APIs, as well as the pitfalls.

Regular efforts to educate people within government about the technology, business and politics of APIs need to be scaled, weaving in relevant stories and case studies as they emerge. Without regular, consistent education efforts and the sharing of success stories across agencies, open data and APIs will never become part of our federal government’s DNA.

Inspire and tell stories outside government

As within government, the stories around government open data and APIs need to be told outside the Washington echo chamber, educating citizens and companies about the availability of open data and APIs, and inspiring them to take action by sharing success stories of open data and APIs being used in the development of applications.

The potential of popular platforms like Amazon and Twitter spread through word of mouth among developer and power user communities; the same path needs to be taken with government data and API resources.

The next 2-3 years of the API strategy for the U.S. government will be about good old-fashioned hard work, collaboration and storytelling. We have blueprints for what agencies should be doing when it comes to opening up data, deploying APIs and enticing the private sector to innovate around government data; we just need to repeat and scale until we reach the next level.

How do we know when we’ve reached the next level? When the potential of APIs is understood across all agencies, and the public instinctively knows they can go to any government /developer domain and find the resources they need, whether they are an individual or a company looking to develop a business around government services.

The only way we will get there is by building a solid group of strong case studies of open data and APIs driving change in government. Think of the IRS e-File system, how many citizens that ecosystem reaches, and the value generated through commercial partnerships with tax professionals. We need 10-25 similar stories of how APIs have impacted people’s lives, strengthened the economy and made government more efficient before we can consider ourselves at the next level.

Even with all this housekeeping ahead, what should be next for Data.gov?

Data.gov has continued to evolve, adding data sets, agencies and features. With recent high-profile stumbles like Healthcare.gov, it can be easy to fall prey to the historical stereotype that government can’t deliver tech very well. While this may apply in some cases, I think we can get behind the movement occurring at Data.gov, with 176 agencies working to add 54,723 data sets in the last 12 months.

I feel pretty strongly that before we look toward the future of the Data.gov roadmap, we need to spend a great deal of time refining and strengthening what is currently available at Data.gov and across the numerous government agency developer areas. Even holding these beliefs, I can’t help but think about what is needed for the next couple years of Data.gov.

Maybe I’m biased, but I think the next step for Data.gov is to set its sights purely on the API. How do we continue evolving Data.gov and prepare it to not just participate in, but lead, the growing API economy?

Management tools for agencies

We need to invest in management tools for agencies, both from commercial providers and as a government-wide umbrella effort. Agencies need to be able to focus on the best quality data sets and API designs, and not have to worry about the more mundane necessities of API management like access, security, documentation and portal management. Agencies should have consistent analytics, helping them understand how their resources are being accessed and put to use. If OSTP, OMB, GSA and the public expect consistent open data and API results from agencies, we need to make sure agencies have the right management tools.

Endpoint design tools for data sets

Agencies should be able to go from data set to API without much additional work. Tools should be made available for easily mounting published datasets, allowing non-developers to design API endpoints for querying, filtering, accessing and transforming those datasets. While downloads will remain the preferred path for many developers, making high-value datasets available via APIs will increase the number of people who access them, including those who don’t have time to deal with the overhead of downloads.
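A minimal sketch of what such a tool could generate under the hood: mount a published CSV and expose a filterable, paginated JSON endpoint over it. The campsites.csv file and its columns are stand-ins for any agency dataset:

```python
import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load a published dataset once at startup; a stand-in for any agency CSV.
with open("campsites.csv", newline="") as f:
    ROWS = list(csv.DictReader(f))

@app.route("/api/campsites")
def campsites():
    # Simple column filter plus basic paging over the mounted dataset.
    state = request.args.get("state")
    limit = int(request.args.get("limit", 20))
    offset = int(request.args.get("offset", 0))
    rows = [r for r in ROWS if state is None or r.get("state") == state]
    return jsonify(total=len(rows), results=rows[offset:offset + limit])

if __name__ == "__main__":
    app.run()
```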

Common portal building blocks

When you look through each of the 50+ federal agency /developer areas, you see 50+ different approaches to delivering the portals developers depend on to learn about and integrate with each agency’s APIs. A common set of building blocks is needed to help agencies standardize how they deliver developer portals. Each approach might make sense within its own agency, but for a consumer trying to work across agencies, it can be a confusing maze of API interactions.

Standard developer registration

As a developer, I currently need to establish a separate relationship with each federal agency. This quickly becomes a barrier to entry, one that will run off even the most seasoned developers. We want to incentivize developers to use as many federal APIs as possible, and providing a single point of registration and a common credential that works across agencies will stimulate integration and adoption.

Standard application keys

To accompany standard developer registration, a standard approach to user and application keys is needed across federal agencies. As a developer, I should be able to create a single application definition and receive API keys that work across agencies. Otherwise, the work required to manage a separate key for every agency will keep developers from adopting multiple federal APIs. Single registration and shared application keys will reduce the barriers to entry for the average developer looking to build on top of federal API resources.
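
From the developer’s seat, the goal looks something like the sketch below, where a single credential works against any agency’s endpoint; the hostnames and key parameter are hypothetical placeholders, not real federal services:

    import requests

    # One registration, one key, many agencies (hostnames are illustrative).
    API_KEY = "YOUR_SHARED_FEDERAL_KEY"

    for endpoint in (
        "https://api.agency-one.gov/v1/facilities",
        "https://api.agency-two.gov/v1/budget",
    ):
        # The same credential is accepted everywhere, so adding a new
        # agency's API to an application is one line, not a new signup.
        response = requests.get(endpoint, params={"api_key": API_KEY})
        print(endpoint, response.status_code)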

Developer tooling and analytics

When integrating with private sector APIs, developers enjoy a wide range of tools and analytics that assist them in discovering, integrating, managing and monitoring their applications’ integration with APIs. This is still very rare with federal government APIs. Standard tooling and analytics for developers need to become part of the standard operating procedure for federal agency /developer initiatives, helping developers be successful in every aspect of their usage of government open data and APIs.

Swagger, API Blueprint, RAML

All APIs produced in government should be described using one of the common API definition formats that have emerged, such as Swagger, API Blueprint and RAML. These provide a machine-readable description of an API and its interface that can be used for discovery, interactive documentation, code libraries, SDKs and much more. Many private sector companies are already doing this, and the federal government should follow their lead.

Discoverability, portable interfaces and machine readable by default

As with open data, APIs need to be discoverable, portable and machine readable by default. Describing APIs in Swagger, API Blueprint or RAML accomplishes this, allowing APIs to be presented, distributed and reused in new ways. Each agency can publish its own APIs, while aggregators take the machine-readable definitions from each and publish them in a single location. Imagine a single budget API site providing access to every federal agency’s budget without having to visit each /developer area; there are many more examples like this that would increase API usage and extend the value of government APIs.
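
As a sketch of what that aggregation could look like, this hypothetical Python script pulls Swagger definitions from a couple of agency /developer areas and merges them into a single machine-readable catalog; the URLs, and the assumption that each agency publishes a swagger.json, are mine:

    import json
    import requests

    # Hypothetical locations of each agency's machine-readable definition.
    AGENCY_DEFINITIONS = [
        "https://www.agency-one.gov/developer/swagger.json",
        "https://www.agency-two.gov/developer/swagger.json",
    ]

    catalog = []
    for url in AGENCY_DEFINITIONS:
        definition = requests.get(url).json()
        catalog.append({
            "title": definition.get("info", {}).get("title"),
            "paths": sorted(definition.get("paths", {})),
            "source": url,
        })

    # One aggregated, machine-readable index of APIs across agencies.
    print(json.dumps(catalog, indent=2))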

Mashape, API Hub

A new breed of API directories has emerged. API portals like Mashape and API Hub don’t just provide a directory of APIs; they simplify API management for providers and integration for consumers. Federal agencies need to make their APIs friendly to these hubs, maintaining active profiles on each platform, keeping each API listed in the directories, and actively engaging consumers there. Agencies shouldn’t depend on developers coming to their /developer areas to engage with their APIs; they need to reach out to where developers are already actively using APIs.

Consistent API interface definition models

Within the federal government each API is born within its own agency’s silo. Very few interface designs and data models are shared across agencies, resulting in APIs that may do the same thing in radically different ways. Common APIs such as budget or facility directories should use a common interface design and underlying data model. Agencies need to share interface designs, and work together to make sure the best patterns across the federal government get reused.
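
A shared model could start as simply as agencies agreeing on a single definition of, say, a budget line item and building their endpoints around it; this minimal sketch uses hypothetical field names:

    from dataclasses import dataclass

    # A common model that any agency's budget API could return, so /budget
    # behaves the same whether it lives at USDA or Interior. Every field
    # name here is illustrative, not an official schema.
    @dataclass
    class BudgetLineItem:
        agency: str
        fiscal_year: int
        program: str
        amount_usd: float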

Webhooks

In the federal government, APIs are often a one-way street that developers come to and pull information from. To increase the value of data and other API-driven resources, and to reduce the load on agency servers, APIs need to push data out to consumers, reducing polling and making integration much more real-time. Technologies such as webhooks, which let an API consumer register a web URL to which agencies push newly published data, changes and other real-time events, are widely used to make APIs much more of a two-way street.
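
On the agency side, the push half of that street can be as simple as looping over registered callback URLs whenever a dataset changes; in this minimal sketch the subscriber list and event shape are illustrative assumptions:

    import requests

    # Callback URLs registered by API consumers (illustrative).
    SUBSCRIBERS = [
        "https://civic-app.example.com/hooks/new-data",
        "https://newsroom.example.org/hooks/new-data",
    ]

    def notify(event):
        """POST a new-data event to every registered webhook URL."""
        for url in SUBSCRIBERS:
            try:
                requests.post(url, json=event, timeout=5)
            except requests.RequestException:
                pass  # a real system would retry, then expire dead endpoints

    notify({
        "event": "dataset.updated",
        "dataset": "federal-campsites",  # hypothetical dataset
    })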

Hypermedia

As the world of web APIs evolves, new approaches to delivering the next generation of APIs are emerging, and hypermedia is one of these trends. Hypermedia brings a wealth of value, but most importantly it provides a common framework for APIs to communicate, embedding essential business logic and direction for developers and helping them use APIs in line with API provider goals. Hypermedia has the potential to not just make government assets and resources available, but to ensure they are used in the best interest of each agency. Hypermedia is still gaining traction in the private sector, but a handful of government groups are taking notice. It holds a lot of potential for federal agencies, and the groundwork and education around this trend needs to begin.
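
In practice, a hypermedia response tells the client where it can go next rather than making the client hard-code URLs. This sketch assumes a HAL-style response with a _links section, which is one common hypermedia convention; the entry point URL is hypothetical:

    import requests

    # Start at a hypothetical entry point and let the API lead the way.
    response = requests.get("https://api.agency-one.gov/v1/budget").json()

    # A HAL-style response embeds navigation alongside the data, e.g.:
    # {"fiscal_year": 2014, "_links": {"self": {"href": "..."},
    #                                  "next": {"href": "..."}}}
    next_href = response.get("_links", {}).get("next", {}).get("href")
    if next_href:
        # The provider, not the client, decides what "next" means; this is
        # how business logic and direction travel with the API itself.
        next_page = requests.get(next_href).json()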

Evangelists present

The first thing you notice when you engage with a government API is that nobody is home. There is nobody to help you understand how it works or to overcome obstacles when integrating. There is no face behind the blog posts, the tweets or the forum replies. Federal government APIs have no personality. Evangelists are desperately needed to bring this human element to federal government APIs. Every successful private sector API has an evangelist, or an army of them, spreading the word, supporting developers, and making things work. We need open data and API evangelists at every federal agency, letting us know someone is home.

Read / Write

I know this is a scary one for government, but as I said in the webhooks section, APIs need to be a two-way street. There are proven ways to make APIs writeable without jeopardizing the integrity of API data: allow for trusted access, and let developers prove themselves. There is a lot of expertise, “citizen cycles,” and value available in the developer ecosystem. When a private sector company uses federal government data and improves on it, the government and the rest of the ecosystem should be able to benefit as well. The federal government needs to allow for both read and write on its APIs; this will be the critical next step that makes government APIs a success.
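
From the consumer side, a trusted-write tier might look something like the following; the endpoint, the elevated-key scheme and the moderation queue are purely illustrative assumptions about how an agency could do this safely:

    import requests

    # A write credential issued only after a developer has proven
    # themselves (the endpoint and key scheme are hypothetical).
    TRUSTED_WRITE_KEY = "ELEVATED-KEY-EARNED-OVER-TIME"

    correction = {
        "facility_id": "rec-area-1234",
        "field": "phone",
        "suggested_value": "555-0100",
    }

    # The write lands in a moderation queue rather than directly in the
    # agency's database, so the integrity of the source data is protected.
    response = requests.post(
        "https://api.agency-one.gov/v1/facilities/corrections",
        json=correction,
        headers={"X-Api-Key": TRUSTED_WRITE_KEY},
    )
    print(response.status_code)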

These are just 14 of my recommendations for the next steps in the federal government’s API strategy. As I said earlier, none of this should be done without first strengthening what has already been built in the federal government around open data and APIs. But even while we play catch-up on what’s already there, we can’t stop looking towards the future and understanding what needs to come next.

None of these recommendations are bleeding edge or technology for technology’s sake. This is about standardizing how APIs are designed, deployed and managed across the federal government, emulating what is already proven to work in the private sector. If the federal government wants to add to the OpenData500, and establish the meaningful stories needed to deliver on the promise of open data and APIs, this is what’s needed.

With the right leadership, education and evangelism, open data and APIs can become part of the DNA of our federal government. We have to put aside a purely techno-solutionist view and realize this is seriously hard work, with many pitfalls and challenges, and that in reality it won’t happen overnight.

However, if we dedicate the resources needed, we can not just change how government works, making it machine readable by default; we can forever alter how the private and public sectors work together.

Why no one uses your government data

Publishing government information is about much more than simply throwing 0’s and 1’s over the firewall. It’s about building ecosystems and communities. It’s about solving shared challenges. It’s about consumption — after all, that’s the American way.

Go to any agency website, and chances are you’ll find at least one dataset sitting idly by because the barrier to consume it is too damn high. It doesn’t matter how hard stakeholders had to fight to get the data out the door or how valuable the dataset is, it’s never going to become the next GPS or weather data.

We can build cruft to get around the hurdles, but so long as the workflow is optimized for the publisher, not the consumer, we’re always going to be setting ourselves up to give the naysayers even more fodder to point at as an example for why we shouldn’t waste agency resources on open government initiatives.

There was an interesting Hacker News comment that caught my eye around the release of a Bureau of Labor Statistics API. User @nmcfarl wrote:

An interesting micro-site – it seems very much more aimed at people who know BLS data already, and much less aimed at coders. The FAQ answers the question “What is an API?”, but doesn’t even have a list of the IDs required in order to get any data whatsoever out of that API.

This struck me for two reasons. First, it shows that the data publishers still believe key stakeholders within government do not yet understand the concept of, or value in, producing an API. That is a chilling thought considering the critical role open government data can, should, and does play in shaping our economy and reimagining how citizens interact with their government.

Second, the fact that the documentation is geared not towards developers but towards internal stakeholders shows a lack of consideration for the data consumer’s user experience. In this case, listing IDs is just scratching the surface of what a developer might expect from the private sector. Is the API responsive? Does it return data in a usable format? Are related queries properly linked? The list goes on.
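
To make that gap concrete: once you have a series ID in hand, pulling data takes only a few lines, yet nothing in the documentation hands you that ID. Here is a minimal sketch against what I understand to be the BLS v1 timeseries endpoint, using the CPI-U series as an example:

    import requests

    # Everything hinges on knowing a series ID up front; "CUUR0000SA0"
    # is the Consumer Price Index for All Urban Consumers, all items.
    payload = {"seriesid": ["CUUR0000SA0"], "startyear": "2012", "endyear": "2014"}

    response = requests.post(
        "https://api.bls.gov/publicAPI/v1/timeseries/data/",
        json=payload,
    )

    for series in response.json()["Results"]["series"]:
        for point in series["data"]:
            print(point["year"], point["periodName"], point["value"])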

All experiences — government experience especially — should be optimized for the consumer, not for the publisher, regardless of format or internal politics.

The General Services Administration, much to their credit, has begun a “First Fridays” usability testing program, which is the tangible result of a much larger and much needed effort to educate publishers of government information as to the importance of and necessity for a purposeful commitment to user experience.

But as we increasingly push for government information to be published in machine-readable formats, what becomes of that focus on the consumer’s user experience? Is the user experience format agnostic? Here, the immediate consumer of the information is a civic hacker or other developer, and the interface is an API rather than an agency website, but the intention and importance are just the same.

The next time you publish a data source, whether in government or out, just as you should with any website, ask yourself one simple question: How can you optimize the experience for the consumer?

Continuing San Francisco’s national leadership on open data

Port of San Francisco (Photo: Luke Fretwell)

Today, the San Francisco Board of Supervisors will take its final vote to approve my update to our city’s groundbreaking open data law. My open data ordinance, in its simplest terms, standardizes and sets timelines for the release of appropriate city government data.

I know that my update to our open data ordinance will help lead to further innovation and technologically driven services, solutions, platforms, and applications for civic issues and problems. Technology is not going to be the cure-all for every problem government faces, but in the right instances it can improve our residents’ quality of life while continuing to boost our local economy at the same time.

All across the nation, cities, counties, states, and even the federal government have taken and continue to take steps towards making appropriate government data available, because open data has proven to spark innovation, drive greater efficiency and cost-savings in government, and fuel further economic development – as evidenced by the recent and steady growth of the civic startup sector.

My law modifies and standardizes the city’s open data standards, ensuring that data is released in machine-readable formats; sets timelines for city departments to release appropriate city data sets; creates a mechanism for city staff and agencies to interact with the public and entrepreneur community on prioritizing the release of city data sets; and makes San Francisco the first city in the nation tasked with developing a strategy to give residents access to their own government-held data.

There are examples here in San Francisco and nationally that show open data in practice, and how it can deliver all of the benefits mentioned above – whether it is Yelp’s recent partnership with the city to post public health scores for city restaurants on its website, helping residents make healthier choices, or the acclaimed San Francisco Recreation and Parks app, which helps residents and visitors find park and recreation locations, make picnic table reservations, and purchase tickets for concerts, art exhibits, and other events straight from a mobile device.

The standardization of the city’s technical open data standards, which ensures that data will be available in machine-readable, non-proprietary formats, is key to unlocking the true potential and value of the data sets the city holds. A recent report from McKinsey & Company states that open data can help unlock $3 trillion to $5 trillion in economic value worldwide annually across seven distinct sectors. A new economy with this much potential is something that should not be ignored.

My ordinance also creates tighter deadlines for city departments to follow in releasing and updating appropriate government data. Tighter deadlines create certainty that will be extremely beneficial to the public and entrepreneur community. With more certainty, entrepreneurs and the public can better plan their ideas and implementations around our city’s open data sets, which will form the base of the next product, service, or application that benefits all San Franciscans.

The inclusion of timelines regarding the release of appropriate government data sets was not an arbitrary decision. It was a decision based in practice and from testimony we heard from the public and entrepreneur community. Yo Yoshida, CEO and Co-founder of Appallicious, was even quoted as saying “We look forward to putting some teeth into the open-data movement through this legislation. We do have some snafus with some departments not being able to release it quick enough to give the developers the ability to create products from this and create industry and jobs and move the movement forward.”

My ordinance also creates a better mechanism for the public and entrepreneur community to interact with the city staff and departments responsible for cataloging, updating, and uploading appropriate government data sets. By mandating that each data set include the contact information (phone and email) of the staff who uploaded it, the ordinance ensures that interactions between the two parties can spark creativity and discussion about potential high value data sets, so that the next amazing products and services are always on the horizon.

Lastly, my ordinance makes San Francisco the first city in the nation to develop a strategy for giving residents access to their own government-held data. This requirement reflects a growing national movement calling on all levels of government to give residents access to their own data for their own use. If it is yours, we should give it back to you – simple as that.

San Francisco Magazine called my ordinance the “Super-Boring City Law That Could Be Huge.” I guess open data may be boring to some, but I would tell you it depends on who you ask. There is nothing boring about open data’s transformative potential to increase government efficiency and accountability, fuel further economic development, and create an atmosphere that encourages innovation, discovery and growth.