Love Droid Amy4 -- Chapter 6
This story is copyrighted by the author. All rights are reserved. This story and any portion of it may not be used or reproduced in any way or form without the express written permission of the author. If you read this story on any website other than on Literotica, it has been copied and used without the author's permission. Amy4 is based upon a real woman who is super cute, super smart and super sexy. She knows who she is. All characters are over 21 years of age.
It wasn't until we got home and took her new clothing out of the bag that I realized that we had completely forgotten about undergarments -- no bras or panties. It didn't really make much difference since I was the only person seeing her and she didn't have to go to the doctor's office or join the military, but I thought I should probably get her at least one bra and panty set. Pink. I think pink would look awesome on her. I made a mental note.
After shopping, I was more curious about what Amy4 knew, or had programmed into her, and what she didn't know. I was also really curious about how she could learn. She went from a pretty bad kisser to an OK kisser in a very short amount of time with no verbal feedback from me. How does this lady learn? What does she learn? Does she pay attention to and learn from everything she perceives? If so, she must have some pretty powerful microprocessors packed away in there.
I did some reading up on Artificial Intelligence (AI) and came across something called the Turing Test. It asks a simple question: can you tell whether you are interacting with a machine or a human? If you cannot tell, then the machine has demonstrated intelligence indistinguishable from a human's and has therefore passed the Turing Test.
I read an interview in the New York Times with a lady who is an AI expert. Her point was that computers can be programmed to do very difficult things, like calculating flight paths to Neptune, sequencing the human genome, or designing helicopters to function in a Martian atmosphere that is about one hundred times less dense than Earth's, things that would take humans a very long time to figure out. However, there are many things that are fairly easy for humans to figure out but extremely difficult for machines. If you ask an adult whether all birds can fly, he or she would probably pause, think, and say yes, based upon the birds that person had seen or had experience with, like robins and blue jays. Then if you ask whether an ostrich can fly, the person would quickly remember that an ostrich is a bird and change the answer to no, not all birds can fly. Ask a person whether a bird in a cage can fly. They'd likely puzzle over it for a second or two and then respond, well, yes but no; or, it depends upon how big the cage is. Machine learning has major difficulties with questions like this that require nuanced understanding.
So I thought I would try it with Amy4 and see what I could find out about how she thinks and reasons. "Amy4," I said, "It is generally accepted by most societies that people should keep their homes clean and orderly. It's seen as a virtue that people should aspire to. When I was a kid, my parents taught me that I should make my bed after I get up."
"Yes, I understand this," she said. "Keeping premises clean reduces the risk of virulent bacteria or viruses in your living quarters and promotes good health. Keeping a home orderly permits persons to find things more easily and manage their affairs more efficiently."
"OK, good, you got it," I said. "So what about this? If it's generally accepted that making your bed is a virtue; is it OK not to make your bed after you get up?" Amy4 stopped moving and appeared to be processing my question.
After some seconds she responded, "There may be a slightly increased risk of bacteria or viruses if the sheets are not exposed to sunlight."
I said, "OK, I think that's true but that doesn't directly answer my question - Is it OK or not to make the bed?" She froze again and this time for a longer period. She was stumped, so I thought I'd ask a variant of the question, "When is it OK not to make the bed after you get out of bed?" Amy4 was unable to formulate an answer to this new question and went completely quiet. It was almost like she had shut down. It took about 2 minutes before she started to re-engage. I wondered if all of her RAM had been used in trying to answer that question and it wasn't until she had been able to clear out some of her RAM later that she was able to process and respond. Hmmm, I thought, I'm going to have to try this with some co-workers and see what they say. I thought that most people could give you answers to this last question fairly quickly, such as if there is an emergency in the house and you need to leave, you don't have to make your bed. If you are sleeping with another person and that person is still in the bed, it's OK not to make the bed. If you plan to wash the bedsheets that day, it's OK not to make the bed.
The AI expert lady had implied in the NYT interview that answers to questions like this depend on knowledge of social and cultural norms, situational expectations, and awareness. People can figure these things out fairly easily because we do it all the time in everyday life. Machines cannot. The AI expert was not too concerned about machines becoming sentient.
This series of questions with Amy4 didn't really give me any insights into what she knew or how she thought, but it did help me understand a little better the kinds of questions she might have difficulty responding to and the kinds of things that were limited by her programming and method of learning.
One day I made a comment to her about someone who did something cool. I don't remember who it was or what it was about, but I did say something like "she's standing on the shoulders of giants." After hearing my comment, Amy4 immediately replied, "Yes, Isaac Newton was a remarkable man."
Then I said, "Yes, Newton was quite the guy. How does someone just invent calculus?" I meant it as a rhetorical question and was not expecting an answer.
"I'm not sure. Perhaps you should ask Gottfried Wilhelm Liebniz if he were living" she responded.