It almost sounds as though you're suggesting (or at least leading the reader in that direction) that asking whether something is autonomous isn't really useful, and that asking how autonomous it is makes more sense. That seems like a framework that could be more useful for us: instead of a Turing Test, which for the reasons you give here might never be passed, we could describe system A as "level 3 autonomous," system B as "level 4 autonomous," and so on.
Don't ask me for any details, though! I'm more of an "idea guy."
I throw an idea out there and run away quickly when the going gets tough.
Good launching board for thoughtful conversations here, Michael! I hope others take this opportunity.
Honestly, that's not a bad way to think about it. A few years ago I created a framework called FIDES (Frameworks for the Integrated Design of Entrusted Systems) which actually looked at gradients of autonomy.
Because "true autonomy" ends up as hand-waving, since we can't define it even for humans, AND humans themselves have different levels of autonomy. (Army example: a new private has much less autonomy than a special operations team lead.)
So what we'd do is determine the minimum level of autonomy and intelligence needed (man or machine), and then determine the maximum levels we'd allow or trust it to have.
If there was no overlap, you didn't have an entrusted design space, and you'd have to reconcile the requirements, either by reducing the level of autonomy/intelligence or by increasing what you'd allow.
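If it helps to make the overlap idea concrete, here's a rough sketch in Python (the numeric levels and the names are just illustrative assumptions for the example, not anything formal from FIDES):

```python
# Illustrative sketch of the "entrusted design space" overlap check described above.
# The numeric autonomy levels and the function name are assumptions for this example only.

def entrusted_design_space(min_needed: float, max_trusted: float):
    """Return the (lower, upper) range where the autonomy the task requires
    overlaps the autonomy we're willing to allow, or None if there's no overlap."""
    if min_needed > max_trusted:
        return None  # no overlap: reduce the need or increase what we'd allow
    return (min_needed, max_trusted)

# The task needs at least level 3 autonomy, and we trust the system up to level 4:
print(entrusted_design_space(3, 4))  # (3, 4) -> a workable entrusted design space
print(entrusted_design_space(5, 4))  # None   -> requirements have to be reconciled
```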
Sorry, that's a long answer, but you actually kind of nailed the idea of levels and gradients. It's kind of like the layers of AI we talked about before:
https://www.polymathicbeing.com/p/the-layers-of-ai
Yeah, just the sort of spectrum I was considering. I think everything needs to be gradients, as you describe them, not "yes or no" to the questions we really can't answer.
Now all we have to do is get everyone to agree on a standardized way to measure these things! I'm sure that won't take any longer than it took for SI to be introduced.
See you in a few thousand years?