Free Won’t

From what I understand about artificial intelligence (AI), and in particular today’s well-known software like Google’s DeepMind, the programs can learn from the experience of repeating a task, such as playing a game, driving a car, or reading lips.


The more the AI program practices, the better it gets, and practice it can! By playing against itself over and over, very quickly, it keeps learning and improving as it goes. Before you know it, the AI is better at certain tasks than humans are. Not only that: if one AI program learns something new after repeating these tasks millions of times over, it can then pass this new learning on to all the copies out there playing games, driving your car, or reading lips (or whatever else it might be tasked to do).

It seems clear to me that the AI program that drives a car must have certain ethical ‘rules’ programmed into it. For instance, a rule such as: “Don’t go crazy and run over every person that you see.” Most of these rules are pretty obvious, and are probably not something the AI would break even by accident.

However, what about the more subtle rules? Have these been written yet? Imagine rules for a scenario such as this: the car is traveling at full speed down a highway when a crowd of people suddenly runs in front of it. Of course, the default action is to stop. But what if the distance to the crowd is less than the braking distance, and the only other option is to swerve off the road and potentially kill the passengers in the car? What decision will be made, if any? Is it simply a matter of counting how many will be killed and picking the best option? Will the AI program always make the same choice? Are car passengers entitled to override these settings? Can you make these choices in the admin settings, or do you have to hack the software? Unfortunately, it seems like the ethics have to be ‘hardwired’ and practically unhackable. For instance, if the ethics routine isn’t verified as ‘secure and working’, the program can’t run.
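Just to make that last idea concrete, here is a minimal sketch in Python of what a ‘verified before it runs’ ethics routine and a crude casualty-counting decision might look like. The names (ethics_verified, choose_maneuver, the rules themselves) are all made up for illustration; nobody is publishing their actual routines.

```python
import hashlib
import sys

# Hypothetical 'hardwired' ethics rules. In a real system the checksum below
# would be a signed constant baked into read-only firmware, not computed here.
APPROVED_ETHICS_RULES = (
    "never target pedestrians",
    "prefer the maneuver with the fewest expected casualties",
)
APPROVED_CHECKSUM = hashlib.sha256("|".join(APPROVED_ETHICS_RULES).encode()).hexdigest()

def ethics_verified(rules) -> bool:
    """Refuse to run unless the rules match the signed-off version byte for byte."""
    return hashlib.sha256("|".join(rules).encode()).hexdigest() == APPROVED_CHECKSUM

def choose_maneuver(options):
    """Crude 'count the casualties' rule: pick the option expected to harm the fewest.
    options is a list of (name, expected_casualties) pairs."""
    return min(options, key=lambda opt: opt[1])[0]

if not ethics_verified(APPROVED_ETHICS_RULES):
    sys.exit("Ethics routine not verified as 'secure and working'; refusing to start.")

# The dilemma from above: the braking distance is too long, so braking still hits people.
options = [("full_brake", 3), ("swerve_off_road", 2)]
print(choose_maneuver(options))  # 'swerve_off_road' under this crude counting rule
```

Of course, collapsing the whole dilemma into one expected-casualty number per option is exactly the part that feels too crude, which is why all the questions above are still open.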

Of course, this might all be moot. If these AI software programs can be used to drive cars, I’m sure they can just as easily drive war machines. They’re just automatons following instructions. They aren’t ‘driven’ or ‘motivated’; they’re simply repeating a learning process to perform tasks the best they can. You still need to define what’s ‘ok’ and what isn’t in the parameters of the program.

Currently, the ethics parameters, with rules such as ‘don’t drive into people’, aren’t something that changes as the AI program learns how to master the task of driving. The parameters of what is ‘ok’ to do or not do cannot be changed by the AI learning software; they are controlled only by the program’s design. The AI program follows the rules but can’t change them. However, I could imagine a world where the AI software would be designed with the ability to learn and improve upon its own ethics routines. Perhaps these ‘improvements’ would be small incremental changes, but over time, after millions of interactions and experiences, could these routines become something completely different from what they started as?
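A rough sketch of that split, again with made-up names (DrivingPolicy, ETHICS_RULES) purely for illustration: the learning loop is free to nudge the driving parameters, but the ethics constraints live in a read-only structure that the update step never touches.

```python
from types import MappingProxyType
import random

# Frozen ethics constraints: a read-only mapping the learning code never writes to.
ETHICS_RULES = MappingProxyType({
    "max_speed_near_pedestrians_kmh": 10,
    "may_leave_road_to_avoid_crowd": False,
})

class DrivingPolicy:
    """Toy learnable policy: one tunable parameter, nudged by 'experience'."""
    def __init__(self):
        self.following_distance_m = 30.0  # learnable

    def learn_from_experience(self, reward: float) -> None:
        # The learning step only ever touches the policy's own parameters.
        self.following_distance_m += 0.1 * reward

    def proposed_speed(self, pedestrians_nearby: bool) -> float:
        speed = 100.0 - self.following_distance_m  # toy calculation
        if pedestrians_nearby:
            # Ethics rules are consulted, never rewritten.
            speed = min(speed, ETHICS_RULES["max_speed_near_pedestrians_kmh"])
        return speed

policy = DrivingPolicy()
for _ in range(1000):
    policy.learn_from_experience(reward=random.uniform(-1, 1))

print(policy.proposed_speed(pedestrians_nearby=True))   # still capped by the frozen rule
# ETHICS_RULES["max_speed_near_pedestrians_kmh"] = 200  # would raise TypeError: read-only
```

The scenario I’m imagining above is a design that deliberately moves those rules inside the part the learning step is allowed to modify.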

It’s funny how we expect these AI programs to have better ethics than the average person, but these programs will only be learning based on what we teach them to start with, if we even bother to teach them anything. It’s nice to imagine that a self-learning AI program would come across some pictures of cute kittens on the internet and suddenly ‘learn’ some ultimate truth that ‘love conquers all’, and somehow ‘know’ all about right and wrong, but the reality is that the software might not measure these morals the way humans do at all. For instance, the machines might decide organic beings are irrelevant, or that some micro-organism in the ocean must be the dominant life form on earth (at the cost of everything else). It sounds like a crapshoot to let machines figure it out for themselves. Not that I wouldn’t want to listen to whatever insights they might have, but I don’t want my automated car telling me that I shouldn’t be driving at all, that I should walk even though it is raining.

Even if we taught these programs to have perfect ethical sub-routines (whatever that might mean), will these programs ever have ‘free won’t’? Can an AI really make choices if it isn’t ever given an option to choose? Sure, a developer might allow a software program to ‘randomly’ do or not do something, but that’s not what I’m talking about. I’m talking about actual choice: the choice to do nothing. If you don’t have ‘free won’t’, you clearly do not have free will. Of course, if the software just does nothing all the time, it’s probably not very good software, or it’s conflicted about something and one of its routines will need to be reset or something.
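For what it’s worth, that ‘randomly do or not do something’ mechanism amounts to nothing more than a coin flip. A tiny sketch, with names that are entirely mine:

```python
import random

def act_with_random_veto(action, p_skip=0.05):
    """Occasionally do nothing instead of acting.
    The 'veto' comes from a pseudo-random number, not from any decision."""
    if random.random() < p_skip:
        return None  # did nothing, but nothing *chose* to do nothing
    return action()

print(act_with_random_veto(lambda: "overtake the truck"))
```

The skip branch fires because a number generator happened to land below 0.05, which is exactly why it doesn’t count as ‘free won’t’.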

I guess what I’m implying is that you choose not to do something when part of your spirit says, ‘No, wait. Despite what I’d like to do, and perhaps am very good at doing, let me reconsider and instead do nothing.’ You can probably visualize a similar situation: your first reaction after being physically assaulted might be to respond in kind, and sometimes, well, we just react the way we do and hope for the best. We’re only human after all; I know I am.

The thing is, though, that just like a learning software program, whether or not I choose to do something is most likely due to all my previous experiences. You can imagine that an ‘aggressive’ version of who I am might quickly develop if I were always under some physical threat. I might be more prone to swearing or yelling at people, for instance. Even if I started off as a nice, peaceful dude, under the wrong circumstances my attitude might change. Hopefully my ethics would not change no matter what the circumstances, but that sounds more idealistic than realistic.

I imagine it can be argued that AI software programs have no ‘real feelings’, even if they’re programmed to emulate ‘real feelings’. That’s why machines are so much better than us ‘mere humans’. Well, that, and because computers are so much faster and never make mistakes (unless they’re programmed to make mistakes, or simply have mistakes in their code, which also sometimes happens).

Just for the record, I’m against any sort of automated killing machine, even the ones made ‘by accident’. Of course, most of the ‘bad machines’ are usually controlled by people. In fact, to my knowledge, all machines are still under the control of people, even the ones that run intelligent programs that can read your lips.