Entire Businesses Can Be Run by AI Without Human Intervention: Why We Need to Talk About This (+ Convert’s Stance)
To kickstart this piece, let’s establish a fact: We aren’t AI usage or development experts. And we do not wish to masquerade as such.
Instead, we’re observers, just like you, trying to make sense of an exponential advancement in technology.
There’s been a ton (conservative estimate) of hype around AI since late 2022. You can’t turn on the TV today without hearing about AI within ten minutes.
But let’s press pause for a minute. Usher in the silence. What exactly is going on?
AI is way more accessible these days, thanks to OpenAI’s ChatGPT and the thousands of spinoffs it either inspired or powered, or both. People are going down this AI rabbit hole, throwing caution to the wind.
From ‘experts’ and ‘connoisseurs’ to laypeople, AI has trickled down, at an unprecedented rate, to consumers who may not realize the implications of using AI-powered tools. Even children who can use smartphones now routinely come in contact with AI.
But AI is not merely another technology. The idea behind it (at least in the sense in which it’s being hyped) is to move past building something new based on existing human thinking (generative AI).
Instead, the idea is to outsource our thinking and decision-making to AI-powered interfaces.
This idea is tantalizing. It means fewer ‘brain calories’ to burn. The thought that we can shirk our responsibilities, and apparently do so with marvelous ease, is tempting.
As such, AI is being trained to usurp the human-ness of the very people training it. This is practically unavoidable, since the potential market AI can capture spans almost the entire breadth of humanity.
This includes folks who for various reasons do not possess the knowledge and discernment needed to make careful use of such a powerful tool. Folks who will be using the tool with impunity regardless.
As you can imagine, this comes with its own devious set of challenges.
So, instead of adding to the noise—blindly advocating for AI or vehemently campaigning against it—we, at Convert, want to foster a conversation around AI and its ethical use in business. We’re asking questions, learning, and making informed decisions about a more mindful approach to AI’s adoption.
And we invite you to do the same.
The World’s First AI-Run Business: ChatGPT as CEO
A Portuguese startup, AIsthetic Apparel, sells T-shirts with AI-generated designs by Midjourney. ChatGPT, the appointed CEO, created the name, logo, business plan, and marketing strategy for the company. The founder, João F. Santos, acts as an assistant and follows the AI’s instructions.
Check this: With ChatGPT’s leadership, this apparel company raised $2,500 from angel investors and made a profit of €7,000 in the first week. ChatGPT projects an annual profit of €40,000 and a valuation of €4 million for the company.
This is a true story from just a couple of months ago.
It’s a thought-provoking experiment in AI-driven business management and innovation. But also one that’s in the very infancy of its potential.
AI-run businesses will be incredibly efficient, using fewer resources and analyzing vast amounts of data to make faster, and arguably better, decisions. With these perks and more, a scenario where entire businesses are run by AI is not a pipe dream. It has already arrived.
But the question is: Should entire businesses be run by AI?
This will be a recurring theme moving forward. People who are bedazzled by the promise of Artificial Intelligence’s humming power need a reality check.
And this reality check can often be as simple as repeating the statement: Should I/we do this?
Just because you can, doesn’t mean you should!
Let’s break this down.
Over the last three decades, with the advent of the internet and the creator and Intellectual Property (IP) economies, the baseline of who can go viral, who can influence, and what can shape culture and leave an impact has shifted from the privileged few to almost anyone with the will to execute on an idea.
This is empowering.
But when you add Artificial Intelligence to the mix, the playground opens up to include every person on the face of the planet. With good intentions, or bad.
Why is this a problem?
This is a problem because Artificial Intelligence changes the quantum of possibility. It is creating a new world where what was hitherto outside the realm of possibility is now very much feasible (and with a low barrier to entry).
New worlds demand revised codes of morality.
The AI-driven world doesn’t have its own code of ethics.
In the absence of ingrained guiding principles, “Just because I can, doesn’t mean I should” needs to be an ongoing mantra; the rhythm to which morality blazes its trail.
Consider this example:
We are huge fans of the way Eden Bidani composes her copy.
Her words are threaded together uniquely. You can spot her distinctive style a mile away.
We could gather pieces of content Eden has penned, train ChatGPT or any other AI model on them, and proceed to create copy for the Convert website in Eden’s style and cadence.
Whether ChatGPT or today’s AI models can produce something that truly matches Eden’s prowess is a debate for later. It’s beside the point anyway, given that technology is ever-progressing, and artificial intelligence in particular is breaking barriers at lightning speed!
The question is: Should we even attempt to appropriate Eden’s style?
She is a professional who has spent years honing her craft. She flavors her copy with her own life experiences, which, according to Joseph Sugarman, is one pillar of the trifecta holding up excellent copy.
Any attempt to duplicate her work would not only be plagiarism of a kind that did not exist until a year ago; it would also be a great disservice to the human aspect of her work.
Our apprehension is that not many businesses will pause to consider these ramifications.
Just because ChatGPT can act as a business leader doesn’t mean it should be one. Leaders make decisions that affect people’s lives, such as hiring, firing, and even sentencing. Should we trust AI with those decisions without human intervention?
Many of us have witnessed AI being unintentionally biased.
Here’s another example from the time Amazon tried to use AI to find the best candidates for its tech jobs. It turned out the AI was biased against women.
The AI learned from past resumes that most applicants were men and favored candidates with male-associated activities or skills, while overlooking women with impressive achievements or qualifications. Amazon could not fix the system’s gender bias and decided to stop using it.
Or the time a French company tried to use OpenAI’s GPT-3 system to create a medical chatbot for doctors. The chatbot suggested that a suicidal patient should go ahead with the deed. The company concluded that GPT-3 was fit only to help doctors unwind, not for any serious medical purpose.
On the topic of whether AI should run without human intervention, the list goes on:
- When AI is used to control or operate physical systems, such as robots, weapons, vehicles, or infrastructure. What if they malfunction, get hacked, or act unpredictably in these high-risk, high-impact situations? Remember Murphy’s law.
- When AI is used to interact with humans, such as chatbots, virtual assistants, or social robots. Can we trust it to interpret human emotions, intentions, or preferences correctly? Even more critically, can we trust it not to manipulate people for commercial or malicious purposes?
- When AI is used to learn from data or feedback, such as reinforcement learning or self-learning systems. AI systems can improve their performance and adapt to new situations by learning from their own experiences, but they may also deviate from their intended goals, acquire undesirable behaviors, or cause unintended consequences. Remember when Microsoft’s Tay became a “sexist, racist monster”?
So, while AI-run businesses may have their appeal, there are obvious drawbacks, challenges, and risks worth thinking about seriously.
A more balanced approach would be:
Artificial (Machines) + Intelligence (Humans)
This way businesses can leverage the strengths of both humans and machines without falling into their pitfalls. While human beings can outsource their tedium to AI, we shouldn’t outsource our ethics, strategic decision-making, or morals.
Artificial Intelligence & Regulations: The Current Scenario
Recently, Sam Altman, CEO of OpenAI, testified before members of a Senate subcommittee in the United States and agreed with them that AI needs to be regulated.
Governments around the world are recognizing the impact of AI and are enacting legislation to guide its development and application.
The Brazilian Artificial Intelligence Bill, approved by Brazil’s House of Representatives and now under analysis by the Federal Senate, is a prime example.
In Europe, the EU AI Act signals to the rest of the world how a government can weigh its appetite for technological innovation against oversight.
One thing these legislations have in common is that they acknowledge we are past the point of no return. AI is here to stay, and the focus is now on ensuring that its development aligns with principles such as human dignity, privacy, non-discrimination, transparency, and ethical use.
But this is easier said than done. Even more so in a world of artificial intelligence development that changes this fast.
For example, the static categories of risk (unacceptable risk, high risk, and the lower-risk tiers) may change frequently, almost within days, because of how quickly new tools and AI-enabled concepts hit the market.
Enter Elon Musk, Steve Wozniak, and others’ open letter to pause giant AI experiments.
It puts the onus on AI developers and governments to control the pace of upgrades. It says “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”, ultimately calling for a conscious choice not to train more powerful models until regulations are in place to identify and mitigate future risks.
If developers and governments put a pause on AI development until confidence is established, does that end further AI training? In other words, can AI train itself without human intervention?
Yes. With nothing but access to data, AI can search for patterns and teach itself to “fill in the blanks”. This is called self-learning AI, and it relies on unsupervised (or self-supervised) learning.
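To make “fill in the blanks” concrete, here’s a toy sketch in Python (our own hypothetical illustration, not any production system): it learns word-to-word patterns from raw, unlabeled text and then predicts a missing word. Nobody labels anything; the text itself supplies the supervision.

```python
from collections import Counter, defaultdict

# Toy self-learning: build bigram statistics from raw, unlabeled text,
# then use them to "fill in the blank". The next word in the text is
# its own supervision signal -- no human labeling involved.
corpus = (
    "ai can learn patterns from data . "
    "ai can teach itself from data . "
    "humans can learn from experience ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def fill_in_the_blank(previous_word: str) -> str:
    """Predict the most likely word to follow `previous_word`."""
    candidates = following.get(previous_word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(fill_in_the_blank("can"))  # -> "learn", a pattern it found by itself
```

Scale that same idea up to billions of parameters and trillions of words, and you get the self-teaching behavior that makes people nervous.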
“Unsupervised” adds more fuel to already frayed nerves. And rightfully so, because you can’t tell whether what the AI is learning is helpful to humans or, worse, leans toward an anti-human agenda.
However, in those high-risk categories where there’s a legitimate cause for alarm, the EU AI Act calls for a centralized database of sites and technologies that must be subjected to conformity assessments and mandatory post-market performance logging.
Because at all points in time, the benefits should drastically outweigh the risks. Don’t you agree? Although “drastically” here (much like the risks themselves) needs to be quantified.
Without that exact quantification, you can still position your organization on the right side of AI use compliance. The safe path is transparent and non-discriminatory, protects privacy, and preserves human dignity.
Some of this is already reflected in Brazil’s stance. For instance, it includes upskilling the workforce to prepare for the AI disruption, something notably missing from the current version of the EU AI Act. You can see how differently these approaches manage the societal impact of AI.
It is important to note that the EU AI Act is still under negotiation, with the aim of reaching an agreement by the end of 2023. Let’s see what changes.
Concerned groups and organizations like the Future of Life Institute have issued recommendations to bolster the EU AI Act, including better risk classification, especially with the advent of multi-purpose AI like ChatGPT. Some aspects of it may constitute high risk, while others don’t. It all depends on the use case.
They also call for better protection for AI whistleblowers and “sandbox environments”.
With the first, people who experience AI manipulation get clear ways to report it and seek redress. With the second, businesses working on AI innovation get a safe space to test the technology without fear of repercussions.
You don’t want to stifle progress with more caution than is needed. At the same time, you want to protect humanity.
How Can Businesses Make the Most Efficient & Ethical Use of AI?
Right to the point: here’s how you can achieve ethical AI use while still maximizing efficiency, with a multipronged approach:
- Stay up-to-date with the latest developments in AI regulation and legislation. So you can constantly assess your posture relative to government expectations regarding the use of AI.
- Develop and adhere to clear ethical guidelines that align with both legal requirements and societal values. This includes principles related to transparency, non-discrimination, privacy, and human dignity.
- Conduct regular risk assessments to identify and evaluate the potential risks associated with AI applications in your business. You must understand the categories of risk as defined by regulations and assess how they may change over time.
- Cultivate a culture within the organization that values responsible AI development and use. Encourage employees at all levels to consider the ethical implications of their work and to act in accordance with regulations and societal values.
- Establish a dedicated team responsible for monitoring compliance with AI regulations. This team should work closely with legal, technical, and operational departments to ensure alignment with regulatory requirements.
- Ensure that AI development and deployment prioritize human well-being and job fulfillment. This may include policies that favor human-AI collaboration over full automation in areas where human judgment and empathy are crucial.
- Provide training and resources to employees to understand the ethical implications of AI and the relevant regulations. Prepare your workforce for potential automation and the shift in job roles.
In the experimentation space, the brightest minds have one or two ideas to share with you about merging artificial intelligence with A/B testing:
- Karl Gilis, Co-Founder of AGConsult, and David Mannheim, Founder of Made With Intent
- Craig Sullivan, Optimiser in Chief at Optimal Visit
CROs all over the world are already putting this transformative power of AI to work. When we asked experimenters how they’re currently using AI in their workflows, they told us they’re using:
- ChatGPT to brainstorm content ideas, generate copy, and summarize user feedback
- NLP and ChatGPT to categorize and theme user research text documents
- Bandit algorithms and reinforcement learning to adapt experiments based on incoming data (see the sketch after this list)
- Evolutionary algorithms to test creative combinations on Facebook ads
- Clustering algorithms to identify common characteristics among survey respondents
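To make the bandit idea above concrete, here’s a minimal, hypothetical sketch (the variant names and numbers are ours, not a Convert feature): a Thompson sampling bandit that gradually shifts traffic toward the better-converting variant of an A/B test as results come in.

```python
import random

# Two-variant Thompson sampling bandit. Each variant keeps a
# Beta(wins + 1, losses + 1) belief about its conversion rate;
# traffic drifts toward the likely winner as data accumulates.
variants = {"A": {"wins": 0, "losses": 0}, "B": {"wins": 0, "losses": 0}}

def choose_variant() -> str:
    """Sample a plausible conversion rate per variant; show the max."""
    draws = {
        name: random.betavariate(s["wins"] + 1, s["losses"] + 1)
        for name, s in variants.items()
    }
    return max(draws, key=draws.get)

def record_result(name: str, converted: bool) -> None:
    variants[name]["wins" if converted else "losses"] += 1

# Simulate 1,000 visitors; assume variant B truly converts better.
true_rates = {"A": 0.05, "B": 0.08}
for _ in range(1000):
    shown = choose_variant()
    record_result(shown, random.random() < true_rates[shown])

print(variants)  # B should end up receiving most of the traffic
```

Note how the human still decides what to test and why; the algorithm only reallocates traffic. That division of labor is exactly the Artificial (Machines) + Intelligence (Humans) split argued for above.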
Generative AI may also be used to improve ideation in product development and experimentation.
So, now that you know all this, what’s the acceptable way forward? It’s important to embrace AI in your organization responsibly and ethically. Here’s how you can start using artificial intelligence today:
- Improve Efficiency in Experimentation: Run more tests and improve test velocity. Feed AI the relevant data about your app or website, and it can generate plenty of test ideas and hypotheses (see the sketch after this list).
- Adopt a “People First” Approach: Remember that a tool’s productivity is only as good as the skill of the people wielding it. Ensure that your team is adequately trained to make the most of AI. They’re the ones who run AI; AI won’t run itself… yet.
- Automate Basic Tasks: Allow AI to handle basic or “busy work,” freeing up human resources to focus on strategic thinking and tasks that require irreplaceable human intervention, ethics, and emotions.
- Simplify Processes Without Surrendering Freedom: Use AI to simplify processes, but don’t surrender the ability to think, drive change, influence, and do good in the world. Preserve the virtues and qualities that are the essence of humanity.
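For instance, here’s a minimal, hypothetical sketch of the first item above, using OpenAI’s Python client (the model name, prompt, and page data are all our assumptions, not Convert tooling):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical page data; in practice, pull this from your analytics
# and user research exports.
page_summary = """
Pricing page with 3 plans; 2.1% conversion rate.
Top exit point: FAQ accordion. Mobile traffic: 60%.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {
            "role": "system",
            "content": "You are a CRO assistant. Propose A/B test "
                       "hypotheses in the form: If we change X, "
                       "then Y, because Z.",
        },
        {
            "role": "user",
            "content": f"Suggest 5 test hypotheses for:\n{page_summary}",
        },
    ],
)

# A human still reviews, prioritizes, and ethically vets every idea.
print(response.choices[0].message.content)
```

The model drafts; your team decides. That keeps the “People First” principle from the second item intact.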
Matrix and Terminator: Possible?
We’ve already fed all facets of humanity’s collective consciousness into the gaping maw of AI.
We’ve inspired it with Martin Luther King’s speeches. But at the same time, it knows of the bloody French Revolution. What’s fair, who is in the wrong, or who is right isn’t the question here.
The fact is… AI knows us intimately. It is capable of sifting through our most loving highs and our most gut-wrenching lows. We’ve made AI in our image. For better or for worse.
We at Convert do not think a kill switch is needed just yet.
There will be a rebellion, but one more insidious in nature.
Borrowing what Kelsey Piper, a senior writer at Future Perfect, penned back in November of 2022:
But given the speed of development in the field, it’s long past time to move beyond a reactive mode, one where we only address AI’s downsides once they’re clear and present. We can’t only think about today’s systems, but where the entire enterprise is headed.
The systems we’re designing are increasingly powerful and increasingly general, with many tech companies explicitly naming their target as artificial general intelligence (AGI) — systems that can do everything a human can do. But creating something smarter than us, which may have the ability to deceive and mislead us — and then just hoping it doesn’t want to hurt us — is a terrible plan. We need to design systems whose internals we understand and whose goals we are able to shape to be safe ones. However, we currently don’t understand the systems we’re building well enough to know if we’ve designed them safely before it’s too late.
Kelsey is not alone in spotting the risks associated with rampant & unchecked AI use. For every business that is looking to create 100 articles a day with ChatGPT, there are two organizations doubling down on retaining the human touch in everything they do.
- IBM: “At IBM®, we are helping people and organizations adopt AI responsibly. Only by embedding ethical principles into AI applications and processes can we build systems based on trust.”
- Microsoft: “We’re committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust.”
- Adobe: “Generative AI is the next step in the decade we’ve put into developing Adobe Sensei. As we harness its power across our cloud technologies, we’re more committed than ever to thoughtful, responsible development.”
However, each AI discussion, whether in forums or on LinkedIn, is divided down the middle.
A dichotomy exists between those riding the AI wave without any concern for the future, and those who are cautiously open to incorporating aspects of AI into their lives but unwilling to make it the cornerstone of their existence.
For these folks, the idea that their will and their data can disappear into a black box (so to speak) and get spit out the other end without any regard for privacy is unpalatable.
In a sense, it’s a huge boon that data and privacy concerns blew up in such a major way before the commoditization of Artificial Intelligence.
People have developed screening habits that make them think twice before saying “yes” to every demand online, either by humans or by AI.
84% of Americans say they feel very little or no control over the data collected about them by the government, and 81% say the same about data collection by companies. And with AI, 68% of people predict that its impact on society will threaten their privacy. That’s more people than those worried about AI taking their jobs.
Back to the insidious rebellion, as opposed to the stark dystopia of something like The Matrix.
Our bet is on the following trends peaking almost at the same time:
- The machine-first pendulum swinging back in the opposite direction. Neither extreme is ever any good. “No human intervention in any endeavor” will temper down into an acceptable form of “no human intervention” (translation: automation of the tedious and the mundane). Opposing ideologies keep clashing till they find balance. Both sides will see the other’s perspective. And that brings us to trend #2…
- Those who have their guard up against AI will let it down. AI will be indistinguishable from good technology. Humans will use AI but without fanfare. The idea that we will go quietly into the night and hand over the keys to the kingdom to an imposing machine overlord is believable only on the silver screens of Hollywood.
The bottom line is: Our lives have changed. What is possible has changed for good.
But at the same time, there is a clear need for regulations that keep up with the pace of AI advancement (a tall ask, since “regulations” and “agility” don’t play well together) and moral codes willingly adopted by the engines of good and change—businesses.
Each business must come up with its own ethics of AI use. And live them. This time not for CSR or brownie points. But to protect our way of life.
This is ours…
Convert’s Code of AI Use
- Replace tools, not people. #TeamOverTools has long been one of our defining values. In AI use, we bring this back into play. Replace inefficient processes and expensive tools. Not people.
- Use where use makes sense. We do not wish to plug every gap with Artificial Intelligence. Convert will continue to introduce AI tools only IF AI can legitimately solve a (real) problem better than existing resources. We embrace progress over perfection, but we still strive for excellence. If AI is a fad rather than a path to excellence, then AI isn’t the solution.
- Prioritize solving customer problems over introducing AI. Even our app development roadmap is not an homage to AI. We will continue to conduct research and listen to our users. If AI supports a particular Job to be Done well, then AI gets in on the merit of it being the best implementation decision.
- Constant upskilling of talent. AI is evolving. With an eye to the future, Convert will invest in upskilling human talent in areas that face the threat of AI displacement.
Convert will use Artificial Intelligence. But the mission of a better world doesn’t change. We might just get there sooner!