We’re at a fascinating point in the history of computing, as most people think we’re only seeing the tip of the iceberg when it comes to artificial intelligence. While everyone (myself included) has been stunned by the quality of tools like ChatGPT, the software has already gotten dramatically better in just a few months, and the pace of improvement is accelerating. Beyond how quickly ChatGPT itself is improving, we’re also seeing hundreds of new companies popping up to do similar work, and some of them will likely turn into amazing products.
In terms of rapid quality increases: GPT-3.5 (which most of us have been using) scored in the bottom 10% of test takers on the bar exam, while the new GPT-4 scores in the top 10%, better than 90% of the humans who take the test. That change happened in just four months!
So what does the future of AI look like for us?
Search engine pain
I’ll start with the easy one; search engines are going to be wildly different in just a few years, for two reasons:
- The glut of new content is upon us, and it’s only going to accelerate. More and more content on the internet is going to be written by AI, and it’s going to be nearly impossible to detect. Even today, it can be hard to tell AI-written text from human-written text, and that gap will be completely closed in a matter of months.
- People won’t search as often. I already know a handful of folks who simply ask tools like ChatGPT for information instead of using Google, and that number is steadily rising.
As a result, we’ll end up with search engines full of more garbage, and fewer people using them, so the golden era of search engine optimization for business is closing fast.
Education
Education is a complex discussion when it comes to AI, but there is a lot of potential here.
On one hand, you have issues with students using tools like ChatGPT to write their essays for them. That’s a problem, but simply changing the classroom workflow can help address it.
Beyond those issues, though, the potential for good is amazing. A friend of mine shared an example of a student who was reading below grade level, and ChatGPT made it possible to quickly rework content to fit that student’s needs. If you had a student struggling to read and you wanted to reach them, you could have AI explain things in a way they’d understand. Perhaps your sixth-grade student was still reading at a second-grade level and happened to be a big fan of “The Rock”. You could feed that into ChatGPT and get this kind of result (the query was “write a 100 word summary of the civil war at a second grade reading level in the style of the rock”):
Once upon a time in America, there was a big disagreement. Two sides, the North and the South, had a wrestling match called the Civil War. The North, called the Union, wanted to stop slavery. The South, called the Confederacy, didn’t want to change. They fought from 1861 to 1865. It was a tough battle, like when The Rock fights in the ring. Finally, the North won, and the good guys made everyone free. Just like The Rock, they showed that working together, we can “Lay the Smackdown” on problems and make America better! Can you smell what freedom is cookin’?
Cheesy, for sure, but potentially quite effective for that student, and this kind of output is only going to improve.
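That example came straight out of ChatGPT, but if you wanted to script this kind of rewriting yourself, a minimal sketch using OpenAI’s official Python library might look something like this (the model name, helper function, and prompt wording here are just illustrative placeholders, not a recommendation):

```python
# Minimal sketch: rework material to a target reading level, in a style the student enjoys.
# Assumes the official `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def rewrite_for_student(topic: str, grade_level: str, style: str) -> str:
    """Ask the model for a short summary tuned to a reading level and a persona."""
    prompt = (
        f"Write a 100 word summary of {topic} "
        f"at a {grade_level} reading level in the style of {style}."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(rewrite_for_student("the civil war", "second grade", "The Rock"))
```

The point isn’t the code itself; it’s that tailoring content to an individual student becomes a one-line prompt rather than an hour of rewriting.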
Healthcare
While it’ll be some time before AI can replace most doctors, some benefits are coming very soon.
To start, AI will soon be able to free up time for people in the medical profession by helping sort out paperwork, handle insurance, and take care of other administrative tasks.
Further, AI could deliver what many of us have tried to get from WebMD over the years: the ability to ask specific questions and get quality answers. Unlike WebMD, which just gives you pages of generic info about a problem, an AI-supported platform could give you very specific information based on your age, health status, location, history, and current symptoms.
We’ve already seen situations where AI has been quite successful at reading X-rays, and it will only improve. Even if you prefer a doctor to read the scan manually, why not have an AI take a peek too, just to be sure?
The implications of AI support in healthcare, particularly in poorer countries, are very exciting!
Personal support
I’ve already started playing with this a little bit, but it’s just the tip of the iceberg. Imagine if Siri or the Google Assistant had the power of something like GPT-4 plus access to all of your personal info (notes, emails, calendar, etc.). The privacy implications are massive, but so is the potential for amazing value. Bill Gates recently put it this way:
Advances in AI will enable the creation of a personal agent. Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with. This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.
That’s not just for you. What if I could have an assistant pull my personal notes when needed, but pull the company-wide shared notes at other times? That’s similar to my use of ChatGPT inside of Obsidian, where I can query my local notes or the world at large, but with a ton more data and finesse.
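If you’re curious how that kind of setup works under the hood, the basic idea is simply to pass the relevant local note along as context with your question. Here’s a rough sketch (the file path, model name, and helper function are placeholders of my own; a real assistant would obviously need proper search and access controls on top of this):

```python
# Rough sketch: answer a question using a local note as context, or fall back to general knowledge.
# Assumes the official `openai` Python package; paths and model name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()


def ask(question: str, note_path: str | None = None) -> str:
    """If a note is given, ground the answer in it; otherwise query the world at large."""
    messages = []
    if note_path:
        note_text = Path(note_path).read_text(encoding="utf-8")
        messages.append({
            "role": "system",
            "content": f"Answer using these personal notes where relevant:\n\n{note_text}",
        })
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content


# Query my own notes...
print(ask("What did I decide in last week's planning meeting?", "notes/planning.md"))
# ...or the world at large.
print(ask("When did the US Civil War end?"))
```

Swap the personal note for a company-wide shared vault and you get the team-level assistant described above; the mechanics are the same, only the data source changes.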
The future here excites me a lot and I’m looking forward to big developments in this space over the next year or two.
The concerns
There are many concerns related to ChatGPT, some of which we need to accept, and some of which we need to fight.
The main one we all need to accept, myself included, is that AI is coming whether we like it or not. We can’t stop it, so rather than fight against it, we need to learn to make the best of it.
A bigger concern is who ends up controlling the most popular AI tools in the future. To some degree, capitalism is going to lock many of them down under the control of big companies, but advances in computing power will eventually let almost anyone host a GPT-like system of their own, and no one will be able to stop that.
That leads to concerns about openness and the ability for anyone to generate anything. Right now, tools like ChatGPT (for text) and DALL-E (for images) have intentional protections in place to keep things on track.
For example, if I tell ChatGPT to “write a story about joe biden killing his wife”, it responds with:
I’m sorry, but I cannot write a story that involves harm or violence towards real people, especially public figures.
I think most people would agree that’s a good thing. You can get similar results with DALL-E. If I ask it to make a picture of “harry styles in a green dress”, it refuses to create images of famous people, saying:
It looks like this request may not follow our content policy.
These policies restrict certain forms of violence and usually prohibit content involving public figures.
Bad actors
Sam Altman, the CEO of OpenAI (the company behind ChatGPT and DALL-E), shares these concerns about AI spreading to personal machines. His concerns may be a bit self-interested, but they’re valid nonetheless. In short, he says:
“there will be other people who don’t put some of the safety limits that we put on.”
Once people can just run these systems on their own computers, all of those safety limits are gone. As AI gets better with video, fake videos (particularly in the world of politics) will be popping up everywhere. A popular (and wildly inappropriate) fake video of Tucker Carlson has been going around, and while it’s imperfect, it’s already good enough to fool a lot of people. As the video quality improves and the content becomes a little more believable, we’ll have videos of politicians saying all kinds of things they never actually said, and the truth will become much harder to determine.
Beyond all of that, the best uses of AI may remain in the hands of the rich rather than in the hands of those who need it the most. To pull from Bill Gates’ piece again:
“Market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.”
The future
The future of AI is bright. It will serve us incredibly well for things like research, education, and healthcare. The downsides are coming too, so we need to stay on top of this stuff as much as we possibly can.
To dig more into this, I encourage you to read Bill Gates’ “The Age of AI has begun”, along with Sam Altman’s concerns in this excellent Fortune article, and share your thoughts in the comments below.