Following several recent strategic hires, wins at FDA, and continuing success with the IRS, USCIS, USDA, and other agencies, OS AI sat down with Brillient’s Chief Digital Officer, Richard Jacik, the 2023 Pinnacle Award Artificial Intelligence Executive of the Year, to learn more about how his company is meeting the challenge head on and navigating the data and governance challenges of AI. Richard also shares advice for government on weeding out those who actually do AI from those who merely say they do, and for industry on what is needed next for AI.
Meeting the Challenge Head on
Brillient has been delivering AI and applying it for the benefit of its customers for nearly a decade. Several years ago the company supercharged its focus and expertise, investing in technology leadership, creating ALICE Labs, increasing its investment in R&D, and formalizing its suite of accelerators by defining and deploying the ALICE ecosystem of intelligent automation tools, which enable transformative solutions that harness the next level of federated and collaborative AI.
“Some of the biggest challenges early on were really more about a failure of imagination. Many customers were resigned to solutions that met their preconceived notion about the benefits that could be delivered by tech – what they thought was the art of the possible. There were notions about where AI and ML fit in, but thinking abstractly about needle-moving improvements in how citizens and constituents might be served was hard. It wasn’t always easy to imagine the link between automation and service quality or the citizen-government relationship.”
Imaginations are now running wild. Over the past year or so, with more advances in AI and more everyday use cases, eyes and minds have opened to new possibilities. “We’re now in the business of translational AI, that is, the near-immediate re-application and enhancement of intelligent solutions developed for one agency to the benefit of other agencies with roughly analogous needs.”
The Data Challenge
Understanding that language models (LMs) are fueled and improved by massive amounts of data, government is uniquely positioned to support the development of LM capability once it clears the hurdle of making that data accessible through digital systems. “All machine learning is powered by data; language models doubly so. Few organizations have as much data at their disposal as a government agency, but unfortunately 95% of that data isn’t readily digitally accessible. It’s analog, images, sound, data streams, sensor input, and a million other things; and even if the data is born and stored digitally, much of that is what I call edge-digital. It’s ones and zeros, but it might as well be analog, because until it is accessible, understood, interpreted, and combined with and understood within the context of other data, it’s just not adding a lot of value. The real power of AI comes from its ability to process and semantically analyze the rest of that data. That challenge is one of the reasons major portions of our ecosystem are dedicated to finding, learning, understanding, and making sense of the entire fabric of information, not solely the pieces that are already digitally managed.”
This data, whether on microfiche, in data lakes, encompassed in email, or sitting on a thumb drive in someone’s desk or file cabinet, has to be brought forward and positioned so it can be used. Enter AI. “Applying AI is a matching problem, not a ranking problem. There is no one AI technology to best accomplish all tasks; and there never will be. The future, and our present, is about building collaborative, coordinated, and federated AI working together as a system of AI systems. This meta-AI requires a policy-driven architecture designed to optimize the work of dozens or hundreds of different intelligent components combined with more traditional compute — applying the AI that makes sense based on the type of data and the needed result.”
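To make the matching idea concrete, here is a minimal sketch of what a policy-driven dispatcher might look like: each combination of data type and needed result is matched to the component suited to it, rather than ranking a single model above all others. The component and data-type names are hypothetical placeholders for illustration, not a description of the ALICE ecosystem itself.

```python
# Minimal sketch (illustrative only, hypothetical names): a policy maps each
# (data type, needed result) pair to the AI component suited to it, rather
# than ranking one model above all others.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class WorkItem:
    data_type: str      # e.g. "scanned_image", "email", "sensor_stream"
    needed_result: str  # e.g. "extract_text", "summarize", "classify"
    payload: bytes


# Registry of intelligent components, keyed by (data type, needed result).
Policy = Dict[Tuple[str, str], Callable[[bytes], str]]

policy: Policy = {
    ("scanned_image", "extract_text"): lambda p: f"OCR output ({len(p)} bytes)",
    ("email", "summarize"):            lambda p: f"summary ({len(p)} bytes)",
    ("sensor_stream", "classify"):     lambda p: f"anomaly label ({len(p)} bytes)",
}


def route(item: WorkItem) -> str:
    """Match the work item to the component the policy says fits it."""
    handler = policy.get((item.data_type, item.needed_result))
    if handler is None:
        raise LookupError(
            f"no component registered for {item.data_type}/{item.needed_result}"
        )
    return handler(item.payload)


if __name__ == "__main__":
    print(route(WorkItem("scanned_image", "extract_text", b"\x00" * 2048)))
```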
Lens of Governance
Part of ensuring unbiased data is committing to governance goals from the start: making plans explainable and traceable, and ensuring they eliminate potential biases toward certain kinds of data over others. “This is one of the reasons you’ll see a lot of chief AI officers are either former chief data officers or they’re building governance models very closely aligned with the data management organizations within government.”
While data governance used to focus largely on what data could be seen and by whom, AI adds a layer of complexity: these systems are adept at drawing summaries, making decisions, finding correlations, and running regressions against raw sets of data, and can perhaps see more than the individual user.
“Virtually anyone can set up their own AI to look at whatever data they are allowed to work with, and it gives them a bit of insight and knowledge that is useful. That sort of garage-band AI has its place as a personal productivity helper. Scaling that up to the enterprise is a different story. It now must address much more complex data access, data inference, and data training environments and concerns. It needs to apply formal methods, formal frameworks, and the pipelines to make it transparent, explainable, and repeatable.”
Weeding Out Those Who Say but Don’t Do
Look at any random assortment of IT company websites and many will claim AI as a capability. Often, though, they lack the perspective and experience to understand the high level of diligence required to develop more than a one-off solution.
“I’ve been doing AI and advanced technology long enough that it surprises some to learn that I look at emerging AI suspiciously long before I get excited about it. I wear my skeptic stripes proudly, and that carries forward to the AI we develop, deploy, and integrate. I want to have every potential question answered, and pitfall addressed, before we recommend these technologies. We do that so that our clients won’t have to.”
“Brillient also excels at getting AI prototypes out quickly and making them available in ALICE’s Playground, where they can be tried and tested, compared and validated, so that the best solution out of a number of options can be presented.”
How does one determine whether someone has capabilities in AI or is just imagining what they might do? The difference, says Jacik, is the term “in production.” “There are plenty of solutions that may work in very narrow use cases or in demo environments, but the question to ask is, has this been deployed? Is someone using this, and how? Is this in front of customers and constituents and being used to benefit citizens? Has it received ATO?”
Partner Opportunities
Working with integration partners whose projects were not initially envisioned to be AI- or data-fabric-enabled, and with others whose IT modernization efforts are shifting to digital transformation, Brillient supports that greater set of objectives within an existing infrastructure.
“Consider the organization that has 1,000 off-the-shelf pieces of software that are running and maybe 1,000 bespoke applications doing things for which there is no off-the-shelf equivalent. To provide a transformative capability, you can either try to address those one at a time, or you can build frameworks, services, and middleware that make it possible for any of those applications to get advanced capabilities, including AI, as soon as they are ready. It saves money and time, and it enables organic transformation not based on ripping and replacing. For many in-flight projects we come in and become the intelligent fabric that sits between them and makes available a lot of new and cool capabilities.”
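As a rough illustration of that middleware idea, here is a minimal sketch, using hypothetical function and capability names, of how a shared fabric layer might wrap an existing application so it gains a new capability without being rewritten. It is illustrative only and not a representation of Brillient’s actual fabric.

```python
# Minimal sketch (illustrative only, hypothetical names): existing applications
# keep running unchanged while a shared fabric layer wraps their outputs with
# new capabilities as each application is ready.
from functools import wraps
from typing import Callable, Dict

# Shared capability registry offered by the fabric (placeholder implementations).
CAPABILITIES: Dict[str, Callable[[str], str]] = {
    "summarize": lambda text: text[:80] + "...",  # stand-in for a summarization model
    "classify":  lambda text: "routine" if len(text) < 500 else "review",
}


def with_capability(name: str):
    """Wrap an existing application function so its output flows through the fabric."""
    def decorator(legacy_fn: Callable[..., str]) -> Callable[..., dict]:
        @wraps(legacy_fn)
        def wrapper(*args, **kwargs) -> dict:
            original = legacy_fn(*args, **kwargs)    # legacy behavior untouched
            enriched = CAPABILITIES[name](original)  # new capability added alongside
            return {"original": original, name: enriched}
        return wrapper
    return decorator


@with_capability("summarize")
def generate_case_report(case_id: str) -> str:
    # Stand-in for one of the many bespoke applications already in place.
    return f"Full narrative report for case {case_id} " + "with many details " * 20


if __name__ == "__main__":
    print(generate_case_report("A-123")["summarize"])
```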
Leveraging an active partner and channel function that prioritizes potential partners based on joint opportunities, white space, and places where others have customer intimacy that Brillient perhaps does not, the company is always open to new names, faces, and opportunities to collaborate.
What’s Next
When asked what a next-year interview might focus on, Jacik says he places his hopes on causal AI over LMs. “Causal AI is a hot topic, and one we’re working with researchers on, that provides that next level of explainability and transparency. More than an inference that will predict one action based on a series of others, the next need is the why. It isn’t about just what the recommendation is, but why that is the recommendation. That matters both to be able to recreate, support, and defend the decision, and to help the human component better understand the motivation and drive behind it.”
“AI is an arc. What we have right now are humans trying to contort themselves into that arc. The fact that there is a special domain for prompt engineering is a perfect example of that. If you don’t get the answer you want, it’s because you asked the question the wrong way. What we need next is to bend that arc to meet the human rather than the other way around.”
Final Thoughts
Jacik says it’s OK to be excited about AI, but we have to be diligent about applying it correctly and making sure that it’s more beneficial than problematic. “Take the excitement and temper it with frameworks and processes and tools and understand this is growing into a collaborative, coordinated AI environment, and take the time to build the frameworks and the governance around that.”