Posted: June 23rd, 2022
Topic: Androids and the Mind/Body Issue
For your thread: Read the synopsis or watch the episode. Write a substantive response (at minimum 350 words) and then post it to the forum.
Your thread should address the first question.
Although you may address other questions as well, your main focus should be on the first.
1. Based on Hasker's reading and the categories he uses, what view of the mind/body issue do you believe Picard holds? Your answer should be supported.
2. Maddox identifies three criteria that make a being sentient: intelligence, self-awareness, and consciousness. Are these criteria sufficient? Can you think of other characteristics or properties a being must possess to be considered a "person"? What could they be?
3. Does artificial intelligence have the potential to be as advanced as that described in the story?
Why or why not?
4. Does Maddox believe Picard is being irrational or emotional in his view about Data?
5. Do you agree or disagree with the final decision of the JAG officer? Why or why not?
To reply, read the other threads from your group members and identify a person you disagree with.
You should write a substantive response explaining why you disagree.
(I will submit the thread once you have provided it; only then can I see other students' threads. After that, I will send you a student thread that disagrees with the one I have submitted, and you can send me a substantive reply of 200 words.)
Picard's position, read in light of the Hasker text, is that Data is his friend and a person who should be free to make his own decisions (synopsis). His view of the mind/body issue suggests that the mind is not an independent entity; accordingly, he holds a materialist view of human nature. Picard suggests that Data is human in his own way: to be human, Data must be able to think like a human, and Data, like all humans, has a brain that can think. Data's brain is also capable of operating his body, a self-operating machine (Hasker, 1983).
Picard and Maddox hold different views on the mind/body problem, which is why the two often clash. Maddox views mind and body as having a dual nature: we have both a brain and a mind (Hasker, 1983). Maddox believes that human minds are dualistic; Data has only a brain but does not possess a mind. Humans have souls, but machines lack them (synopsis).
Maddox stated that three traits make a person a human being, and every person possesses these characteristics, which can be described as human traits. To be considered a person, intelligence, self-awareness, and consciousness are the only requirements (Kosinski & Graepel, 2013).
It is clear that artificial intelligence of the kind shown in the story will be possible in the future. Google Assistant already runs on Android phones: the Google Pixel 2 XL, released in 2017, can respond to voice commands, recognize ambient music, and identify the content of photos. There are many possibilities.
In this case, I am going to side with Maddox. Data is simply a machine with humanoid features: his arms are easily detached, and he can be shut off with the push of a button. Data's ability to learn from his surroundings and adapt to them is amazing, but that is exactly what Data was created to do. Picard is attached to Data because of this ability to learn and adapt, and it is natural that Picard would grow more attached to Data since the two communicate with each other the most. Picard must be able to see that Data is a machine and that it cannot be anything more.
Yes, I fully agree with the JAG Officer’s final decision because of the absence of absolute proof.
It is essential that we always make ethically and morally sound decisions. Data, on the one hand, is humanoid and has formed a close relationship with Picard; on the other, Data is still a machine, even if it possesses some of the amazing capabilities that humans have.
The JAG officer raises a great point with the statement: "We've been dancing around the fundamental question: does Data have a soul? I don't know that he has. I don't know that I do. But I need to allow him to ask that question."
Because machines do not possess souls, they are not subject to the same ethical standards as humans: unlike humans, they cannot be sent to hell for their wrongdoings or glorified in heaven. Just as we do not treat animals the way we treat humans, we should not treat machines as we would humans.
Reply to The Thread
Sydney Jones, while I appreciate your thread, I must disagree with you, and here is why. The episode shows a very different kind of artificial intelligence from the one we have today; developing A.I. has been a long process. Today's A.I. systems can adapt to their surroundings quickly and are intelligent enough to learn from them. A robot's hand can mimic human fingers, and an A.I.-based camera can understand depth of field well enough to blur the background. Many A.I.-based robots can interact with humans and recognize emotions. There are many possibilities, and in the next few years we may see A.I. perform even more difficult tasks.
How one treats an A.I. robot is another, more subjective matter. A materialistic person could treat an A.I. with kindness, while a spiritual person might not; it all depends on the person's mindset. However, you said that we shouldn't treat A.I.s the same way we treat animals, and Sydney, I'm not sure what you mean. After all, animals are living beings and are essential to the natural world, whereas A.I.s are created by humans to perform pre-programmed tasks.
References
Hasker, W. (1983). Metaphysics: Constructing a world view. Downers Grove, IL: InterVarsity Press.
Chatbots, the new world in HCI.
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802-5805.
Star Trek: The Next Generation, "The Measure of a Man" [Television episode synopsis]. (1989, February 13).