Do you think AI is, or could become, conscious?
I think AI might one day emulate consciousness to a high level of accuracy, but that wouldn’t mean it would actually be conscious.
This article mentions a Google engineer who “argued that AI chatbots could feel things and potentially suffer”. But surely, in order to “feel things”, you would need a nervous system, right? When you feel pain from touching something very hot, it’s your nerves sending those pain signals to your brain… right?
Really? I mean, it’s melodramatic, but if you went back through time and asked writers and intellectuals whether a machine could write poetry, solve mathematical equations, and radicalize people effectively enough to cause a minor mental health crisis, I think they’d be pretty surprised.
LLMs do expose something about intelligence: much of what we recognize as intelligence and reasoning can be distilled from sufficiently large quantities of natural language. Not perfectly, but isn’t that just the slightest bit revealing?
There is a phenomenon called emergence, in which a complex system has properties or behaviors that its parts don’t have on their own.
In programming, we can see this when software exhibits behaviors that none of its individual instructions have on their own.
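A concrete toy example (a sketch in Python, using Conway’s Game of Life as the illustration; it isn’t something mentioned above, just the textbook case): the update rule knows only about a single cell and its eight neighbours, yet patterns like “gliders” move coherently across the grid.

```python
# A minimal sketch of emergence, using Conway's Game of Life.
# The rule below describes only how one cell reacts to its eight
# neighbours; nothing in it mentions motion, yet a "glider"
# travels across the grid.
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# Five cells arranged as the classic glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 steps the whole pattern has moved one cell diagonally:
# coordinated "travel" that no individual rule specifies.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

The rule set never defines a glider or says anything about movement; the travelling pattern exists only at the level of the whole system, which is the sense of “emergence” being discussed here.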
If an AI ever demonstrates true consciousness, it will force major changes across many fields, including law and philosophy.
Do you mean conventional software? Typically, software doesn’t exhibit emergent properties; it operates within its expected parameters. Machine learning and statistically driven software can produce novel results, but even that novelty is usually expected. They are designed to behave that way.