AI and the Future of Humanity

Many years ago, I wrote a post about the future of work and the eventual need for a Universal Basic Income, due to the advancements of technology (robotics and AI) that would make human workers obsolete, or nearly so, in various fields. Since that time, we've had real-life demonstrations that UBI does work, and in fact makes the people receiving it more inspired and creative. And, in a less positive light, we have very recently seen AI accelerating its takeover of humanity. Is that hyperbolic? Maybe. But I'd prefer to say it's accurate.

In the last few months, there have been several cases of AI disrupting human spheres. We saw it in the form of "art" AI, which creates artwork from text prompts input by humans. This immediately provoked an outcry from the artists the AI had been trained on, who rightly pointed out that it was stealing their styles and giving nothing back. Now, has inspiration always been a nebulous and controversial aspect of the art world? Yes, indeed. Where does inspiration cross the line into plagiarism, or appropriation? These are questions artists have debated for years. But I believe it clearly crosses over into plagiarism when an AI bot wholeheartedly shoplifts an artist's entire artistic DNA, with no attribution or payment. As though artists don't struggle enough to make money after spending so much time and effort pouring their love into their craft...now they have HAL 9000 shoplifting the pooty too? When making art of any subject a wisher chooses, in the exact style of the wisher's favorite artist, takes literally no longer than wishing on a Djinn, we have a serious problem. This is already rocking the artistic industry: game studios are laying off pre-visualization artists (always one of my favorite kinds of modern artist, known for sweeping vistas and meticulously rendered fantastical worlds, among other things, and intensely talented by necessity) in favor of using AI to generate pre-viz art. And will the money saved by axing those talented artists serve to do anything but further increase the financial BMI of the fat cats at the top? Forgive me for being skeptical, but I doubt it.

In the same vein of AI disruption, we also have similar bots taking over the roles of writers (I promise this post has been poorly written by a human, and should handily pass your Turing test. Here's a typoo to prove it), using small prompts to write articles and stories. And ChatGPT is also coming for the techies themselves: it's making large swathes of code writing obsolete, generating code in a tiny fraction of the time (admittedly it takes some handholding, but it is FAR less demanding than the traditional code writing process).

So in the space of a few months, we have had huge disruptions to entire labor markets. And what's unique about this particular tech assault is that it targets traditionally highly skilled and learned roles. This isn't just a robot replacing a hamburger flipper; these are AIs not only replacing many people who have advanced degrees, but stealing those people's own work as the very means to do so. And in a particularly insulting twist, the AI is orders of magnitude faster at all of these things than a human could ever be.

So what are the implications? Well, it looks like we are heading headlong towards the part of late-stage capitalism where the 200 rich people with all the money leave Earth, abandoning the rest of us underlings, sapped dry of all our AI-stealable skills, to squabble over our Soylent Green crumbs. I kid, I kid. Sort of. To be sure, AI can be a tool to help us achieve great things. But it can also be carelessly used to irreparably damage us. And that sure feels like where we are right now.

To me, if the powers that be aren't hearing alarm bells about the future of humanity and what we do, they just aren't paying attention. Which, given the political antics of the last several years, wouldn't be all that surprising. But let's choose to be optimistic, and believe that there ARE still people in power who want to help the human race (and that they aren't just the sorts who appear to be interested in that, right up until they can buy Twitter and show that they've chosen to throw it all in with the loonies instead. Ahem.) I believe now is the time to throw together a plan and do something BEFORE everything gets awful, for once. And what could they do? Well, I have three ideas (plus a freebie) that could mitigate some things.

  1. Clearly we need to make it so that if artists', writers', or other creators' work is used in a dataset to train a plagiarism bot, the original creators are paid royalties by the organization that created that bot. Both upon initial use, and repeated royalties upon reuse. The good thing about AI is that you have metrics that can track that usage down to the Nth degree.

  2. For every job where AI or robots displace a human, there should be a heavy tax on that position, used to fund UBI for the people who have been displaced. That would make digital slaves less immediately appealing, since they would no longer do the work of humans for the mere cost of developing them once and then making them Legion. It would also fund UBI for those people, so that they can retrain or otherwise lead fulfilling lives and contribute to society, instead of becoming destitute. I know there's the whole argument that "every time there's a new technology, the people tied to the old one cry doom", and that's true. But when cars replaced horses, the people who shoed the horses could learn to work on cars. AI takes jobs and leaves a vacuum that only AI moves into, at least in a huge proportion of cases. And there just hasn't been an equivalent to that at any other point in history.

  3. For my most popular idea, tie the income of the people at the top of companies to the income of the people at the lowest tier. The C-suite characters would no longer be able to reap all the rewards from the humans who actually made the products, as we see so much now. Let's say no one can make more than 20 times what the lowest-paid worker makes; if the lowest-paid worker earns $30,000 a year, the person at the top caps out at $600,000, which is hardly a vow of poverty. I don't buy that anyone can work 20 times harder in a position than someone else, and so many of our deep-seated problems come from the hugely disparate income inequality that has grown over the past several decades. If the people at the bottom of the pyramid are taken care of, we all prosper.

  4. And as a freebie, let's make it so that some projects are reserved, inalienably, for humans. We didn't come this far just to let Pong's great-grandchild take it all away.

Are these ideas perfect? No. But at least they are something to talk about, which I am not seeing Congress do yet. And they need to start, yesterday, before this problem gets out of hand. Because it will, and very quickly. We are on the precipice of being able to make this world so good for so many, if we just keep proactively taking steps to solve problems before it's too late.

Welfare, and the Relevant Conversation We Need to Prepare Ourselves For

I like to try to approach things with an open mind, and learn and adapt when confronted with new data. It's a human foible to want to cling steadfastly to an idea, and I try to release myself from it whenever possible. In the last few years, I've realized that I need to adapt my way of thinking for a different human scenario that we are likely to be confronted with sooner than we think. And that has to do with the welfare state, and, as so much that I talk about, technology.

A theme that I harp on quite a bit is trying to live a balanced life in a world that is increasingly aslant. I've written several posts that revolve around that idea, and I'm sure I'm not done yet. Give me a topic and I'll yammer on, trust me. But essentially a few of the ideas for leading a fulfilling life in a world that is doing everything it can to distract us from that are: 

  • Try to move around every day. Depression hates a moving target. Do some pushups, do some squats, walk and run, jump rope. Any motion is better than no motion.
  • Create something physical whenever possible. You don't have to make a masterpiece, and as any artist can tell you, creation is mostly about failures. But the satisfaction of having something tangible will satisfy something in you that nothing else will. Hence my candle business, for example.
  • Avoid creating an echo chamber that reinforces your own opinions. Seek conflicting information and try to learn. 
  • Read! It doesn't matter what. Read cookbooks if you want. Read science fiction (we all know I love it), find history books that engage you. Just read. It engages your mind, as does exercise, and helps fight off mental apathy.
  • Keep moving forward! Set goals and work towards them. If you don't, all of a sudden you'll realize 20 years passed without accomplishing anything. It's never too late to start making goals and growing. Make sure not to only set large ones, either. Set small ones as well so you are always seeing progress somewhere; that way you don't get frustrated and bored on the way to meeting the large ones.
  • Move with kindness and compassion. Stand for your convictions, but remember everyone has arrived at different points in life due to a lot of circumstances, and be compassionate. No one ever looked good by berating someone who was in pain. Help others whenever possible, both through charitable contributions and charitable acts.
  • Surround yourself with intelligent people who motivate you and do good things. Understand that no one is perfect, but having people who inspire you to live a better life will help ensure that you live a better life. I'm blessed to know some of the most inspiring people on this planet, and I'm not being hyperbolic.

That's just a partial list, but it's a good starting place. It doesn't take a lot of reading between the lines to see that I'm someone who believes in working towards things. And as such, as you might imagine, I believe that government handouts and welfare are a two-edged sword. I absolutely understand that there are circumstances in which people are thrown off-kilter and can't get back on their feet alone. This life is not easy on any of us, but it's also harder on some. While I believe it's essential to help others and be compassionate, I also think that large government offices established to fulfill that role are not ideal. First of all, I've been working in the government for the last decade, and can attest to how much funding gets wasted when there's no capitalism involved. I've seen plenty of chairs filled for the sake of securing funding.

The other problem with large government offices doling out...well, the dole...is the atrophy of the recipients over time. It's a truism of human nature that if we don't have to work for something, eventually we don't appreciate it. Even if at the beginning we did. We start to see it as something we deserve, and desire to give nothing back. That feeling is really a poisonous one, and I'm not blaming the people who fall into it. I'm not waving away the responsibility of the receiver here, but I really think it's just a symptom of depression, when we don't feel we're contributing. And it can make us selfish and petty. So I'm a big fan of welfare programs that also give the recipients a purpose. The New Deal building programs changed over time into something I applaud less, but the initial programs that gave us dams, parks, schools, and much more, built by citizens who were crushed beneath the Great Depression, had a lot that I liked. People didn't atrophy due to lack of motivation; they could see that they were doing something for their community while they got back on their feet. That was huge.

So where am I going with this? Well, I've realized something. I'm not the first or only person to think about it, but I like to talk here about things that are rattling around my brain. The last several years have had extremely heated conversations about "handouts" and welfare state behaviors. My belief that work and the feeling of contribution are essential to mental well-being obviously puts me on a certain side in the current conversation. But that conversation is about to go through a fundamental change.

Within the next decade or so, humanity is going to go through another fundamental upheaval. The population of the earth has ballooned from less than 1 billion in the 1730s to more than 8 billion today. As that change has occurred, there has simultaneously been a huge leap forward in nearly every aspect of life due to technology. We are quickly approaching a nexus yet again. Within the next one to two decades, most jobs are going to be able to be taken over by AI programs or robots of some kind. That's not just true of the assembly line and customer service jobs we already see automated (and as we talk about raising the minimum wage, the rush is coming faster to replace lower-skill jobs with cheaper robots that don't require things like healthcare). It also applies to many of the higher-level jobs. Drones and autonomous systems will soon be handling everything from delivering packages to piloting aircraft, driving trains, and driving our own cars. Even now, programs are intelligent enough to write NEW programs that can outperform programs written by humans. There are programs intelligent enough to design and create music, too. AI is going to be able to out-human us in nearly every human endeavor. That's frightening, but quickly coming closer. Brave New World, indeed.

One perk I do see is that America's outsourcing problem will most likely go away quickly. Human personnel in India and China have taken many production jobs from the US, but that will become an obsolete problem. Paying an overseas worker much less than an American to do a job has undoubtedly affected production stateside, but the cost of an overseas robot and a US-based robot likely won't differ much (at least not for long), so perhaps we'll see a lot of production return home. I hope so anyway.

So the "welfare" conversation is about to change. Suddenly, instead of requiring people to "find work", even those who would seek work of their own volition, are going to have a hard time finding a role. And in that scenario, we can't just allow millions of displaced workers to starve to death because they refuse to find a job. There just literally won't be enough jobs. It won't be a matter of cross training or getting a new field. Workers won't be needed. So what do we do? Humanity will fundamentally change. It will be necessary to have a basic universal wage for all the displaced people.

What I HOPE is that we will essentially end up in a "post-economy economy", of sorts. In the utopian view, it would be a situation in which people figure out how to adapt and happily live when their work isn't "necessary". My recommendation is that we all look into how the nobility of England occupied themselves in the 18th century. Learn languages, learn to draw, take exercise...maybe the waters at Bath, haha...read and write. Engage ourselves. But what I worry about is that we'll basically just stay heads-down and argue on the internet even more.

The "stigma" of not having a job is going to apply much more universally before we know it. And we'd better learn how to adequately prepare ourselves for that era. Learn to be happy and make work for ourselves that results not in a wage, but in satisfaction of other kinds. Because those days are coming. Hopefully we can adapt, become more balanced and rounded, and embrace that change. The conversation will need to as well.

 

Post script for further reading: If you'd like an interesting book about rapid technological change and its implications for what it means to be human, I highly recommend "Future Shock" by Alvin Toffler. It was written in 1970, so it's not 100% accurate anymore (there have been even more leaps since then), but it's a good jumping-off point for contemplation. "Brave New World" is a good cautionary tale about being the wrong sort of leisure class, and it is disconcertingly easy to imagine things going that way.

Dark Net, human frailty, and the race towards making ourselves obsolete

I just read a really great article at Vanity Fair. Much of their content is drivel (I'm not huge into what the robber barons of the age are wearing or eating, so I skip those parts), but I find that I'll unexpectedly run into very well-researched and thought-provoking articles on issues that fascinate me. In this case, the article that excitedly jumped into my lap like an enthusiastic puppy is "Welcome to the Dark Net, a Wilderness Where Invisible Wars are Fought and Hackers Roam Free."

At the very beginning of the article is this passage about the main interviewee (a hacker who is amusingly referred to as "Opsec"):

"He is a fast talker when he’s onto a subject. His mind seems to race most of the time. Currently he is designing an autonomous system for detecting network attacks and taking action in response. The system is based on machine learning and artificial intelligence. In a typical burst of words, he said, “But the automation itself might be hacked. Is the A.I. being gamed? Are you teaching the computer, or is it learning on its own? If it’s learning on its own, it can be gamed. If you are teaching it, then how clean is your data set? Are you pulling it off a network that has already been compromised? Because if I’m an attacker and I’m coming in against an A.I.-defended system, if I can get into the baseline and insert attacker traffic into the learning phase, then the computer begins to think that those things are normal and accepted. I’m teaching a robot that ‘It’s O.K.! I’m not really an attacker, even though I’m carrying an AK-47 and firing on the troops.’ And what happens when a machine becomes so smart it decides to betray you and switch sides?”

The entire article is well worth a read if you're into Information Security, threats, or learning about those parts of society that still operate like the Wild West. Spoiler alert: I am fascinated by all of those areas, so I think this is one of the best articles I've read this year. The blurb above sucked me in hook, line, and sinker. It tickled the part of my brain that enjoys these future-foe tangents, because I think what he's talking about directly addresses one of the factors we seem to avoid letting our collective consciousness linger on for too long.

If you're a regular follower of my blog, you may have surmised that I am basically governed by two large parts of my personality: misanthropic Luddite, and social technophile. Yes, that's conflicting. Yes, I'm aware of that, and I'm also comfortable with duality. It allows me to evaluate and contrast a lot of arguments in my head, and that's one of my favorite pastimes. You never know what you'll find kicking around this old noggin.

The quote about AI sentinels, and AI sentience, articulated a very interesting modern problem. As a species, we love relinquishing power to technology. That's what originally set us apart from the animals. There is evidence of tool use going back millions of years, and we haven't stopped innovating since. Clearly there was a large leap forward during the Industrial Revolution, and it's continued on an upward trajectory ever since.

What's frightening is that we are quickly closing in on the nexus between the point when we can still reliably control those tools and the point when they make us obsolete. In a Genesis way, we have created AI in our image, and our child is rapidly moving towards establishing its own destiny. It's no secret that I actively fear AI overtaking us, because in a binary, numbers-and-logic way, it's not too hard to see that in the very near future, machines with no God-given conscience could come up with cold, logical reasons that we don't really need to be here. We take a lot of energy, we are messy, and we are frequently inconvenient and illogical. In a world of machines, it's easy to see how they would write us out of the equation. Is that an alarmist idea? Well, sure. But if you want to be prepared for the future, you need to look at all possibilities...even the dark and uncomfortable ones. In a system meant to adapt and evolve efficiencies, we are most likely to be the least efficient part of the system. Already, ghosts in the machines have made their own logical leaps in different lab tests. When we relinquish too much power, what's the end game?

In the Vanity Fair article, I particularly enjoyed the CURRENT projection that he comes up with. I've done quite a bit of speculation in my head about what's going to happen in the 5-10 year range, but I enjoyed having the real-time mirror held up in this illustration. In the last several years there have been numerous, very terrifying security breaches in the shadow world. The average person probably doesn't think about them too much, because the data breaches are so large and so frequent, and there's also that good old "this is scary on a huge level, so I'd better not think about it" response. Usually we just see it as a news blip, and maybe a prompt to change passwords. But what has happened is that there have been several large breaches on a level that could really be devastating to a lot of American citizens. Between the health industry breaches, the OPM breach that exposed the government's most sensitive data on its most secretive workers, and the frequent hacks of financial institutions...and those are just the ones we've actually heard about...someone is amassing a lot of data for a lot of nefarious reasons. It's not a big leap to assume that there is some sort of dossier being compiled on most people, and that data isn't being kept to safeguard us. (Since I am already at tinfoil hat level here, I'll throw out my favorite advice: always have a kit, always have a plan, and always be ready.)

The AI drones Opsec speaks of as the sentinels of our systems, with their fluid moral codes (if interfered with at the right time in the learning process), are exactly the sort of moral gray area in our AI workforce that I'm talking about. When we create our own little bot armies of white knights, but they themselves have no sense of light or dark, that sword can easily and nefariously be turned against us by the wrong people. And it already is. Stuxnet is one of my all-time favorite intelligence stories, and that was presumably executed by white knights. But now what are the black knights doing? And when the soldiers we send out onto the battlefield are no longer flesh and blood with some sort of assumed shared moral code, but instead hackable bots, that changes the battlefield entirely.

As the world of AI and computers has become more global, who controls the top players has quickly changed. And as we here in the US focus more and more on the media game of misdirection (insert your pet #HASHTAGSOCIALFRENZYCAUSE), we get more muddled and forget what we are doing. It's easy to form our own echo chambers and ignore the world at our doorstep, and there's solace in pretending the wolves aren't at the door. The more we shout at each other about manufactured crises inside our warm homes, the more we can try to block out the howling of the wolves outside. But when a bit of silence falls in our lives, when we are alone falling asleep, when the batteries on our devices have died or there's no game or reality show flickering to put us into soma relief, we know deep down that someone somewhere is amassing to take things from us. As much as we pretend otherwise, most of the world is not like us. Most of the world has vastly different moral codes than what moves us in the US, and there are plenty who want what we have. Particularly as weather patterns and things like water availability start to affect the other players in the big, scary human survival game, like disease and food. No matter how accepting we want to be of each other (which I support), there are going to be nation states that will not EVER accept us. And while they may or may not be able to get warheads or fighter jets or thousands of soldiers, they likely CAN get access to the internet. And they'll fight that way. Look at the Cyber Caliphate Army, ISIS's hacking division. The battlefield continues to evolve. And we need to be aware of that.

So, what is there to do? After all, when it gets down to it, we are all just players in this game at the most basic level. I think one of the biggest things is to be aware. Look the wolves in the eye and make sure you're aware of their existence. Can you do anything about financial monoliths or energy companies getting hacked? Most likely, no. But you CAN be a good steward of your own information. You CAN make sure you know how to handle yourself in an emergency. You CAN make a plan so that loved ones know where to go if there's a power blackout or the cell networks go down. And finally, try to take time to unplug on your own sometimes, and remember that we don't need technology to handle everything in life. People don't need to be able to get hold of you every minute. Step away, remember how to be a full human, and get used to that idea. Appreciate what we have and the experiences we are getting, because we are lucky to be here.