How Agencies Can Start Sharing and Acting on Data to Better Serve Constituents Now

“Every problem has in it the seeds of its own solution,” said Norman Vincent Peale, the paragon of positive thinking. It’s an insight that holds value for organizations throughout government.

Government organizations, from federal to state to local, have a data problem. It’s not that they don’t have enough of it. In fact, agencies are awash in data, and more is streaming in every minute.

The problem is that very often, agencies aren’t able to get at the right data, or effectively share data with other organizations, or make smart decisions based on the insights they know lie hidden in their disconnected data stores.

The good news is that agency data already contains the seeds of the solution, which organizations can germinate in three ways: by establishing a data trust, by creating an infrastructure for data integration, and by taking advantage of new tools for data analysis and visualization. And by taking these actions, agencies can make tangible progress toward serving citizens more effectively.

Establishing a Data Trust

The new administration has been emphasizing the need to leverage data to solve today’s biggest challenges, from Covid-19 to infrastructure to climate change. Its Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking calls on organizations to commit to making data-driven decisions guided by the best available information.

One way agencies can make good on that promise is to create a data trust. In a legal context, a trust is an arrangement under which a trustee holds property for the good of its beneficiaries. Similarly, a government data trust is a formal agreement among multiple agencies on the sharing of information for the good of all. The trust establishes overarching rules for which datasets will be shared, the ways they’ll be used, and the purpose that usage will serve.

Creating an Infrastructure for Data Integration

A data trust gives agencies a high-level framework for sharing information. But organizations still need to do the hard work of integrating that data to make it available and useful. And for that they need an infrastructure – both a process infrastructure and a technology infrastructure.

Within each agency, an infrastructure for data integration begins with leadership, especially chief information officers (CIOs) and chief data officers (CDOs). While the White House memorandum calls on agency heads to make data available and understandable, CIOs and CDOs are the leaders who will need to take the practical actions to make that happen.

While the commitment to evidence-based policymaking is flowing from the federal government, state and local governments are leading the way in actual data integration. The Commonwealth of Virginia, for example, has created an innovative Framework for Addiction Analysis and Community Transformation (FAACT) to fight the opioid epidemic. The initiative shares data among the department of criminal justice, state and local police, private healthcare systems, and other organizations. Voyatek (then operating as Qlarion) developed the FAACT platform, which has been extended to help the Commonwealth combat Covid-19.

Taking Advantage of Technology for Data Analysis and Visualization

Finally, organizations need to leverage the latest technology that makes data easier to aggregate, analyze and visualize. In some cases, this effort will require investment in modern, cloud-based IT landscapes. But there’s low-hanging fruit that can pay dividends in the short term.

For example, open APIs allow applications to access the data of other applications and services without the need for costly integration projects and rewriting of code. As more agencies deploy cloud-native applications, APIs will become increasingly viable – and increasingly valuable for data sharing.
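To make this concrete, here is a minimal sketch of how an application might consume data from another agency’s open API. The endpoint, field names, and sample payload are all hypothetical – the point is that a documented JSON interface lets one system aggregate another’s data with a few lines of code rather than a custom integration project.

```python
import json

# Hypothetical JSON body returned by another agency's open data API
# (e.g. a GET request to /api/v1/cases?region=central).
# The fields are illustrative, not from any real system.
SAMPLE_RESPONSE = json.dumps({
    "region": "central",
    "cases": [
        {"month": "2021-01", "count": 120},
        {"month": "2021-02", "count": 95},
    ],
})

def total_cases(response_body: str) -> int:
    """Parse the API response and aggregate the monthly case counts."""
    payload = json.loads(response_body)
    return sum(record["count"] for record in payload["cases"])

print(total_cases(SAMPLE_RESPONSE))  # 215
```

In a real deployment, the sample string would be replaced by an HTTP call to the partner agency’s documented endpoint; because the contract is just JSON over HTTP, neither side needs access to the other’s codebase.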

Likewise, agile development methodologies can speed incremental improvements to data analytics capabilities. For example, continuous integration (CI) automates how development teams create and test applications, enabling them to make code changes frequently and resolve issues quickly.
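As an illustration, a CI pipeline is typically just a short configuration file checked into the repository. The sketch below assumes a GitHub Actions-style setup with a Python analytics codebase; the workflow name, trigger, and test command are illustrative placeholders.

```yaml
# Hypothetical CI workflow: run the analytics test suite on every code change,
# so issues surface within minutes rather than at the end of a release cycle.
name: data-analytics-ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python -m pytest   # fails the build if any test breaks
```

Because every push triggers the same automated checks, teams can ship small, frequent improvements to analytics capabilities with confidence that regressions are caught early.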

Longer term, intelligent technologies such as machine learning (ML) and other forms of artificial intelligence (AI) can automatically trawl a sea of data to surface previously unrecognized patterns and connections. Predictive analytics can empower organizations to address emerging issues that in the past might have caught agencies by surprise. And visualization tools can enable all agency teams – not just data scientists – to understand and then act on available information.
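The pattern-surfacing idea can be illustrated without any ML library at all. The sketch below uses a simple statistical rule – flag any value far above the historical mean – as a stand-in for the kind of anomaly detection that ML tooling automates at scale; the monthly counts are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(series, threshold=2.0):
    """Return indices of values more than `threshold` standard deviations
    above the mean -- a simple stand-in for automated pattern detection."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, value in enumerate(series) if value > mu + threshold * sigma]

# Illustrative monthly counts with one unusual spike at index 7
monthly_counts = [12, 14, 11, 13, 12, 15, 13, 48, 14, 12]
print(flag_anomalies(monthly_counts))  # [7]
```

Production systems replace this rule with trained models that weigh many signals at once, but the workflow is the same: the machine scans the full dataset continuously and surfaces the outliers for humans to investigate.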

Data is only as useful as the insights it offers and the informed decisions it drives. So data itself won’t solve agency problems – or enable agencies to solve the problems of citizens. But government data does contain the seeds of the solutions. It’s up to organizations to invest in the mindsets, initiatives, and tools that can transform that data into new understandings and actions that truly benefit citizens.