A couple of months ago, I was at a lighting store where I saw a sign that said, “Lights, 25% off.” The sign wasn’t entirely accurate, as about 50% of the lightbulbs were off.
Do you ever wonder what goes through a lightbulb’s head when you’re about to turn it off? Please say no. That would just be silly. It’s just an electrical device that we created to give us light. Lightbulbs have no feelings; they have no cognitive thought.
However, humans are getting much closer to developing electrical devices with actual cognitive thought. Many experts predict that within 40 years, we will have self-aware artificial intelligence. We already have artificial intelligence that looks and speaks like a real person.
We don’t hesitate to flick a lightbulb’s switch on and off; we feel no pain letting all the light from the device dissipate into darkness. How would you feel turning off a robot that you could relate to?
The greater an animal’s capacity to communicate or bond with us, the more value we place on it and the more attached we become. That is why our society views the consumption of different kinds of animals differently. People who are willing to eat cow meat may not be willing to eat cat or dog meat. In a sense, we are more willing to switch one animal off than another.
Machines already have the ability to measure our preferences and to feed us images and information that we are more likely to be interested in. As time goes by, we gain more and more ability to build robots that can communicate with us at whatever level we want. So it’s not unthinkable that we will have robots that we become attached to. How would you feel turning off that kind of machine?
I’m guessing you would still have no problem turning off the robot. After all, a robot isn’t a living thing like a cat or a dog. But what if it were? What if the robot were self-aware, able to think for itself, and had its own preferences and desires? What if it had intelligence that surpassed our own, and it didn’t wish to be turned off? Would you still be able to press the button and power it off so easily?
The more human-like something is—the more intelligent it is or the more ability it has to relate to us—the more likely we are to make moral judgements regarding it. That is why someone can be charged with animal cruelty for torturing a cat, but not a spider. This is a healthy mindset, because we are using humanity as a measure of value. We need to, because we need to believe that we have value. So it’s only natural and reasonable that moral questions regarding artificial intelligence are raised the closer it comes to our level.
Personally, I believe that it’s great that we have moral concerns about how we treat other intelligent life forms. While I don’t necessarily believe that the act of harming an animal is inherently wrong, I do believe our willingness to do so says a lot about the condition of our hearts. If we are so willing to do harm to something that we can relate to, that may be an indicator of our feelings toward other human beings. Whether we’d like to admit it or not, other human beings are what we can relate to more than anything else.
We’ve grown accustomed to using humanity as the backbone of how we make judgements. That can be great and helpful, but it can also be very limiting.
Even if we prefer to think otherwise, we humans are not the highest form of intelligence in play in our lives. God is. Many of us realize this, and we do our best to relate to God, which is great. However, since we are conditioned to relate to others on our own level, we try to do the same thing with God. This eventually leads to us judging God according to His treatment of us.
I’ll be the first to admit that there are many things God does that I don’t agree with. I believe that this part of our relationship is natural and there’s nothing wrong with it. However, I often cross the line when I become angry or resentful toward God. I let my disagreements with Him go to the point where I make the judgement that what He is doing is wrong. That is the part that is wrong. God is so much higher than us, and He has the right to do what He wants in relation to us, since He created us.
If I developed a robot to do as I wished, and I gave it the ability to make judgements for itself, I would say I had the right to turn it off, because the ability to turn it off was part of my purpose in designing it. Its complaints might affect me on an emotional level because of its ability to communicate with me, but that wouldn’t change my inherent right to do with it as I wished.
Look at the difference in intelligence between a super-intelligent computer and a lightbulb. The difference in intelligence between God and us or anything we could create is even more vast.
I’m not trying to paint the picture of an uncaring God playing with us like we play with toys. God created us in his image, just like we are creating robots in our own. That means that God has designed us for the purpose of relating to us. God wants to relate to us. God wants to have a relationship with us.
Just as the more we are able to relate to something, such as with a dog or a robot programmed to our preferences, the more we are pained to take life away from it, so God is affected at a greater level when something bad happens to us.
I began this post with a question of what a lightbulb might say before it is turned off (actually, I began the post with a lame joke, but that’s beside the point). A more appropriate question to ask may be what the lightbulb would have to say after being turned on. After all, if you are reading this, it means that you have been blessed with the opportunity to live a life according to the purpose for which you have been made. Not only has God not yet hit the “off” switch on you, but He has in a sense powered you on. I don’t know about you, but to me there’s no greater joy than being able to fulfill the purpose for which I have been made.