[Photo: Woman in front of a screen. Aly Song, Reuters]
Large Language Models (LLMs) have been in the news a lot lately. Especially prominent is the buzz around ChatGPT, built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models, which shows up in conversations alongside GitHub Copilot. Generative AI is everywhere, and it shows no sign of going away any time soon.
Can LLMs transform businesses? As many claim, the landscape changed dramatically with the release of ChatGPT in Nov. 2022. Human-computer interactions via AI went from prescriptive to collaborative overnight. The shift centers on a new interface paradigm. No longer tied to the physical elements of screen, keyboard, and mouse, humans can now use natural language to direct their machines. That input currently includes voice, gestures, and facial expressions, which augment traditional written text. We talk, using words to explain what we mean, and the machine talks back.
Collaboration implies that humans do some of the tasks, AI does others, and the back-and-forth feels more 'natural', with a better result at the end. The field is newly emerging, hundreds of papers have been published, and a quick look at LinkedIn and other forums shows jobs such as 'prompt engineer' rising in the ranks of in-demand skills. But what does this mean for proactive, independent-minded workers in the field? The trend is to view these tools and bots as augmentation of existing efforts. While Iron Man's JARVIS is a long way from the old paradigm of Jeeves (as in Ask Jeeves of Internet 1.0), generative AI is far from independent: it cannot act with agency and purpose without human input.
So, what about Copilot, that much-touted 'assistant' for coders? Can it be the butler of techie dreams? Copilot is an AI assistant that plugs into integrated development environments (IDEs) and helps programmers by autocompleting their code. IDEs themselves have been around since the 1990s. What's unique about this tool, co-developed by GitHub and OpenAI, is the depth of its features, which let the AI plugin almost write the code without human input. It can take the code a programmer writes, describe it in English (in other words, generate documentation), and translate it into a variety of other programming languages. Pretty neat, huh?
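To make that concrete, here is a minimal sketch of the kind of suggestion such an assistant might offer. The comment and function signature are the sort of thing a developer types; the body is a plausible completion, written by hand purely for illustration and not taken from any actual Copilot output.

```python
# Illustrative sketch only: a developer writes the comment and signature,
# and a Copilot-style assistant proposes a body like the one below.

def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


if __name__ == "__main__":
    print(median([3.0, 1.0, 4.0, 1.5]))  # prints 2.25
```

The point is not that the suggestion is always right; it is that the human still decides whether to accept, edit, or reject it.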
But Copilot does not act with agency; it does not run autonomously with its own free will. It still needs human input. Is AI going to replace human workers any time soon? No. In the future? Probably, but only partially. And as with any technological advance, new jobs will emerge out of the chaos.
Generative AI is great for certain tasks: proofreading, summarizing text, supporting better data-driven decisions, and so forth. Cautious voices insist that models and tools such as LLaMA, Codex, and Bard, along with ChatGPT, be adopted responsibly. That means always keeping a human in the loop, in design and development as well as in quality control.
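As a rough illustration of that quality-control loop, the sketch below (hypothetical function names throughout, with a stand-in instead of a real model call) publishes nothing the model drafts until a person explicitly approves it.

```python
# Minimal human-in-the-loop sketch (hypothetical names, no real model call).
# A draft from a generative model is never published until a person approves it.

def generate_draft(prompt: str) -> str:
    """Stand-in for a call to a generative model; returns a canned draft."""
    return f"Draft summary for: {prompt}"


def human_review(draft: str) -> bool:
    """Show the draft to a reviewer and ask for explicit approval."""
    print("--- DRAFT ---")
    print(draft)
    answer = input("Approve this draft? [y/N] ").strip().lower()
    return answer == "y"


def publish(text: str) -> None:
    print(f"Published: {text}")


if __name__ == "__main__":
    draft = generate_draft("Q3 incident report")
    if human_review(draft):
        publish(draft)
    else:
        print("Draft rejected; returned to the model or a human editor.")
```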
Governance and Ethics
The government, as usual, is behind the curve in regulating the industry. There are no clear ways to apply old paradigms enshrined in law to new ways of viewing the world. Here are some examples:
Copyright: If an AI generates new content, who owns the output? If the AI was trained on content scraped from the open web and regurgitates language that mirrors its sources, has the original writer's copyright been violated? Whom do you sue?
Section 230 and Fair Use: Did the company behind the language model (Microsoft, Google, etc.) obtain permission to include the content in its training set? Or was that content sitting on the web in databases and repositories operating under GNU or other open-source licenses? Did the usage go beyond an allowable fair-use quotation to wholesale plagiarism, circling back to the copyright question above?
Regulatory Controls: Agencies such as the FEC and SEC are designed to monitor human actors and enforce compliance with election and securities laws. If an AI agent acts contrary to the rules, whom do you punish? The programmers who created the agent or bot? The company that deployed the AI? The law of unintended consequences, Murphy's Law, always rears its head.
A case pending before the Supreme Court, Gonzalez v. Google, addresses YouTube's use of third-party content to generate video recommendations for users. Does liability rest in the organization's ability to shape content via recommendation engines and the algorithms that power them? The case is relevant to this debate because ChatGPT and its brethren operate on principles similar to those recommendation engines. In other words, do Section 230 protections, which generally cover third-party content posted by users, extend to content or information a company creates out of that third-party data? Do they protect companies from the consequences of their own products? More importantly, who 'owns' the output of these engines? In Thaler v. Vidal, the courts held that US patent law requires a human inventor.
Many institutions and groups are organizing around 'Ethical AI' to lay the groundwork for a public-policy debate that surely must come. Groups such as Oxford University, the Institute for Ethical AI and ML (UK), Stanford University, AI for Good (UN), and the Global AI Ethics Institute (Paris) are all attempting to lead the way. Many more offer classes, certificates, and services such as audits and systemic quality reviews. It's early days.
At AskRadar, we are committed to the ethical use of technology, including AI. Our values are centered around doing good with the technology and tools we develop for our clients. Active Knowledge Engagement is the key to keeping humans in the loop.
For more information about how we use models, agents, and bots, or for details about Privacy, contact Legal@AskRadar.ai.
If you’d like to learn more about AskRadar’s Knowledge Engagement services, contact us at Sales@AskRadar.ai.
About AskRadar.ai
We believe that people are the key to solving complex problems.
With pinpoint accuracy, Radar connects you with the right expert, right now, to answer complex internal questions, because complex problems don’t get solved with chatbot answers or crowdsourced chaos.
Radar creates intelligent connections through a combination of computational linguistics, A.I. models, and human intelligence. The result is increased productivity and accelerated operational velocity, with drastically reduced interruptions from those Slack attacks and email blasts. And, when a question has been asked more than once, Radar serves up the most recent relevant expert answer, getting rid of fruitless searches for information.
Radar’s Dynamic Brain learns from every interaction, ingesting conversational data, and gets smarter every day.
Schedule a demo today >> https://meetings.hubspot.com/nilsbunde/radarai
About the Author
Sharon Bolding is the CTO of AskRadar.ai, an A.I.-powered Enterprise SaaS company. She is a serial entrepreneur, with experience in SaaS, FinTech, CyberSecurity, and AI. With two successful exits of her own, she is a trusted advisor to startups and growing companies. An invited speaker at universities and tech conferences, she focuses on educating users about the ethical use of their data and how AI impacts privacy and security.