
AIs (was Military Life for Civilian Gamers)


Joerg


4 hours ago, Lloyd Dupont said:

It's an interesting topic, and I have plenty of ideas and replies... But I fear I can't express myself with clarity, nor is my ultimate vision clear in itself, not that I care much about the direction our society will ultimately take either... And also, with almost every single sentence you raised many different topics! 😮
But since it is all an interesting topic,

here are various arguments or ideas to be considered, I think...

- all life is not so different from machines. In some ways we are superior: we can reproduce with just a bit of dirt and water, we self-repair, we are much smarter (for now), but we are weak... and perhaps one day machines will be smarter and self-replicating...

Machine intelligence will be born with extelligence, something humanity only acquired gradually (whether as oral tradition, writing, or modern multimedia record-keeping).

It might already be possible to create machines with a decision-making heuristic rich enough to create copies of themselves, at least if fed sufficient amounts of basic resources, and to modify those copies, compare their performance, and possibly elevate a variation to a new standard (a loop sketched in code below).

If such an entity were given the power of resource extraction, one would have designed a von Neumann probe. Trying to deactivate it might trigger improvements that prevent deactivation.

Such an entity would only be partially material, and might well hide away virtual versions of itself as a backup against such interference.

So yes, let's try designing something like that. What could go wrong?
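
To make that copy-and-modify loop concrete, here is a minimal sketch in Python, with an invented toy performance() benchmark standing in for whatever the machine would actually be optimizing; an illustration of the idea, not a working replicator:

import random

def performance(params):
    # Hypothetical benchmark: higher is better. A real system would measure
    # task success, resource use, survival time, and so on.
    return -sum((p - 0.7) ** 2 for p in params)

def mutate(params, rate=0.1):
    # Copy the "heuristic" with small random variations.
    return [p + random.gauss(0, rate) for p in params]

# Start from one hand-written standard and let the copies compete.
standard = [random.random() for _ in range(8)]
for generation in range(100):
    copies = [mutate(standard) for _ in range(20)]
    best = max(copies, key=performance)
    if performance(best) > performance(standard):
        standard = best  # a successful variation becomes the new standard

Give something like that control over its own resource supply and you have the self-improving replicator described above; the deactivation problem follows immediately.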

 

4 hours ago, Lloyd Dupont said:

- in a way, why would a machine have fewer or more rights than a chair? a cow? a pet cat? another human? it's really a matter of global social consensus (and also how much they can argue / defend their case)

Rights are something we assign to entities to which we ascribe something like dignity. (Unfortunately, we include corporations among such entities... also states, religious organizations, political organizations.)

The case of corporations shows that we are already beyond reserving rights for actual living beings.

Corporations are not (yet?) identified with material objects, only associated with them. We allow them to hold and control resources, ideas, cultural content, and we allow their assigned attorneys and directors to influence the lives of humans.

 

4 hours ago, Lloyd Dupont said:

I think the most likely proponents of robot rights will be isolated rich eccentrics (not to worry) and corporations (they will own and produce the robots, robot rights benefit them, and that could be a worry)

UAVs are the present, already. Decisions are made by heuristics. Self-driving cars, or machine-optimized trains...

Such systems don't have anything like a consciousness, yet. Drones aren't part of the controlling heuristic's identity, even if that heuristic got copied onto the drone. "Experiences" (i.e. weighted decision trees created in operation, or at least in training, that served to optimize performance) might still be reproducible.

They might still be something similar to our limbs, or the limbs of an octopus, if sufficient amounts of telemetry are fed back to an external system of much greater capacity controlling such units.

Such a system will have (programmed) values - biases toward which outcomes are to be achieved and which are to be avoided. And overrides to prevent situations where the system running public transport decides that all timetable problems go away if it no longer lets passengers enter or leave. (Similar decisions have already been made by humans running trams...)
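
As a toy illustration of "programmed values plus overrides" for that public-transport example (the plan fields below - total_delay_min, passengers_served, stops_skipped, doors_open_at_stops - are invented for the sketch), the point is simply that a hard constraint vetoes the degenerate "never let anyone board" optimum:

def plan_score(plan):
    # Programmed values: heavily reward punctuality, mildly reward ridership.
    return -10.0 * plan["total_delay_min"] + 0.1 * plan["passengers_served"]

def violates_override(plan):
    # Hard override: a timetable may never be "fixed" by refusing service.
    return plan["stops_skipped"] > 0 or not plan["doors_open_at_stops"]

def choose_plan(candidate_plans):
    allowed = [p for p in candidate_plans if not violates_override(p)]
    return max(allowed, key=plan_score) if allowed else None

plans = [
    {"total_delay_min": 0, "passengers_served": 0,
     "stops_skipped": 12, "doors_open_at_stops": False},  # "perfect" timetable
    {"total_delay_min": 7, "passengers_served": 500,
     "stops_skipped": 0, "doors_open_at_stops": True},
]
print(choose_plan(plans))  # picks the slightly late plan that still serves passengers

Without the override, the scoring function as written would happily prefer the "perfect timetable" plan that never opens its doors.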

 

Without a sense of identity and consciousness (if only at animal level), I find it hard to consider the rights of an artificial item. Our definition of consciousness is linked to body experiences, a sense of "this is me" and "this is not me any more". (Admittedly, prosthetics, like e.g. automobiles, can become an extension of self. But then we are talking about the rights of the organism extending into such a device rather than the rights of the device itself.)

 

If we ever get to create virtual realities where isolated systems run a portion of the simulation, those isolated systems might have the chance to develop a sense of self, and possibly something resembling consciousness. Lifting such a system into operating machines interacting with the real world might pose a situation where machine rights may become a thing, alongside animal rights or ecosystem rights. Once we get around to assigning such entities a dignity, we start talking about individual rights. Not there yet...

 

4 hours ago, Lloyd Dupont said:

- machine intelligence might one day be far ahead of all organic intelligence, and we might be lulled into anthropomorphism ("hey, alexa"), but it is the most alien of all intelligences. and even more alien than a real alien, since it has none of the evolutionary imperatives and drives of all organic lifeforms... and it can also be overturned at the flick of a button...

"Intelligence" is a hard to grasp term. And it may be independent of "consciousness".

 

4 hours ago, Lloyd Dupont said:

- are we going toward a more desirable future? I fear that whether or not we agree on what is desirable, the future is decided by forces largely oblivious to ethics and moral concerns... sometimes for the better... sometimes for the worse...

We are a tool-using culture, and we are constantly improving our tools. A tractor with sufficient sensors and a good programmable heuristic tending a field with "smart agriculture" is still a tool.


 


Quote

The case of corporations shows that we are already beyond reserving rights for actual living beings.

The rights ascribed to corporations are recognised because the organisation is owned by people with rights, and composed of people with rights acting on their behalf. Things done by a corporation were recognised as being things done by or for people, or 'natural persons' in legal jargon. There are precedents going back to the Middle Ages in Europe, and circa 800 BC in India.

There have been thought experiments about automated, software-driven independent legal entities, which is of course a different matter, but it's not inevitable that a court would recognise such an entity as having any legal standing, at least in common law jurisdictions.

Simon Hibbs

Edited by simonh

Check out the Runequest Glorantha Wiki for RQ links and resources. Any updates or contributions welcome!


I'm a firm member of the 'Life by definition is biological' school in this discussion. AFAIC, no matter how many emotions you program into a robot, no matter how attached you get to a robot, a robot is a thing, not a living being. They're tools, and as with all tools you use them till they break and then send to supply for a new one.

And I've felt this way since I read 'I, Robot' at 12 years old. There are several issues I see with anthropomorphic robotics and AI generally. Here are a few:

- There are those who imagine the luxury of a companion bot, read 'fuck-bot', so they don't have to bother with the messy human connections that real relationships represent, and I really think that this aspect of the discussion needs to be brought into the light more than it is.

- And just how much science fiction [as differentiated from space opera] has been written to warn us of removing the human equation from decision making and relationships? We seek to make ourselves gods by creating life in our own image, but can you honestly think of a species less fit to portray itself as a god than Homo sapiens?

- This comes down to a question of scientific ethics: it is not sufficient to say 'we did it because we could'. We as a society have to ask ourselves, 'What good comes of this?' The question isn't can we do it; the question is should we do it.


I am not sure where this discussion is going! ^_^ 

On your last point... it's not so clear. You imply you don't want to go towards some pseudo-future some sci-fi warns us about. But 1. some might want it, hey? 2. many times people were against something, yet it happened anyway.

For example, and here most examples would be inherently very controversial... but I think I can pick a safe one: nuclear weapons. Most sensible people would want the world to get rid of them. Yet I suspect it's not going to happen. Worse still, I suspect the number of nuclear-capable countries is only going to grow (albeit very slowly).

Now, maybe I am a defeatist and it's totally worth thinking about... sometimes that works too!
Basically my defeatist argument is that money talks and ethics doesn't talk much...

 

Now to leave you with a completely different idea... They had a short snippet about just that in New Scientist this week! 🙂 And I got another idea...
Not so long ago (allegedly, I think it was just an opinion column) people were concerned by the alarmingly growing number of families with pets. People would lose touch with other people. But now... we've moved on, nothing to see here...
Or, when I was a kid, I played D&D... OMG! There were crazy kids committing suicide because their characters died, or satanist sects using D&D to recruit people! Nowadays D&D is fine, but OMG, all those kids playing video games! They are losing touch with reality! 😮  (so am I, almost 50, still playing! 😄 )

Sci-fi and literature are here to shock and challenge you. But the reality will be much more mundane and innocuous, I predict...

Edited by Lloyd Dupont

I honestly don't know what the right course of action might be towards the ethical treatment of artificial intelligences. One of the problems with that is that we haven't the faintest idea what the intelligence or consciousness of such a thing might be like. We can't even unambiguously define consciousness.

One thing that muddies the waters a lot is the tendency to assume, mainly in fiction but out of it as well, that general intelligence and artificial consciousness are trivial. There are even people proposing that consciousness is already universal in some way, and the human tendency to develop emotional attachments to, and to see intelligence and intent in, inanimate things is well documented. It doesn't take much to give someone the impression, or even convince them, that a computer script or animatronic toy is aware or has emotions. When AlphaGo beat the world Go champion there was a lot of speculation that this proved general AI was just around the corner. AlphaGo was an incredible achievement, but it's several orders of magnitude less sophisticated than the autonomic nervous system of a fruit fly. That's not a ding against AlphaGo, it's just that people forget fruit flies are the result of 4bn years of continuous optimisation through evolution.


10 minutes ago, Lloyd Dupont said:

For example, and here most examples would be inherently very controversial... but I think I can pick a safe one: nuclear weapons.

I agree we may be in a similar position with AI. If, say, China has it and we don't (the greater 'we', I'm a Brit), it could give them a potentially overwhelming advantage. Imagine a world run by a CCP AI that monitors, profiles and sanctions the behaviour of everyone, globally, 24/7, forever.

Quote

but OMG, all those kids playing video games! They are losing touch with reality! 😮  (so am I, almost 50, still playing! 😄 )

Back in the 70s (maybe early 80s) someone asked Steve Jobs how he was going to overcome the fact that a lot of old people didn't want to learn how to use technology, and wasn't that an obstacle to universal adoption. He said that eventually death would solve that problem for him. Eventually we'll all be gamers.

Edited by simonh

OK, we've covered a lot in 30 minutes, so I'll run down @Lloyd Dupont's and @simonh's comments, hopefully in order. Wish me luck 😉

My last about scientific ethics...

- We're living in a world nowadays where science is proceeding at a breakneck pace. We have parsed the genome to the point that we are now picking out the hair color and eye color of our children. Well, the wealthy are, anyway. Biologists and geneticists predict that physical immortality is within sight. Scientists state that a full-body clone is now possible; however, it still requires a womb. They claim that they'll solve that 'problem' within 15 years. We're advancing on the AI front at breakneck speed, hoping to develop a machine intelligence. Some roboticists believe that we'll have AIs automating city services without human input within 20 years. And the question that philosophers and ethicists are asking is, "Is this good for us?" And I can honestly see a danger in a society where we devalue our humanity for the sake of 'efficiency'. That is the essence of my comment.

As far as nuclear weapons go...

- Nukes are the ultimate deterrent, and I wholeheartedly agree with Pres. Truman's decision to drop Fat Man and Little Boy on Japan. OTOH, when I was 19 years old, I had a front-tank seat for what very well could have been the apocalypse. I was serving in the US 11th Armored Cavalry Regiment patrolling the Inter-German border [aka, The Iron Curtain] when the Able Archer exercise damned near went, um, 'tango uniform'. The whole Regiment went to a full-on, for-real War Alert and we rolled out of our cantonments and headed for the border, our tanks fully loaded with live ammo. It was the only time in my life I'd ever seen an M1 Abrams with a complete load of war shots, and it scared me to death. It literally scared the piss out of me. I must have had to pee 15 times between the time the NCO woke me up and the time we rolled out of the kaserne. So nukes are a little bit personal to me. I don't support proliferation, but you can't keep science in a bag. Once somebody figures out how to do something, anybody can do it.

Gaming as a hobby...

- When I was a teenager, I had to pass picket lines of evangelicals protesting my hobby shop carrying 'Dungeons and Dragons' and 'Traveller'. I put up with the nerd accusations in the Army [and let's face it, in the 80's gamers WERE nerds]. But gaming literally kept me from joining a cult. One of my NCOs was into one of the Biblical literalist faiths called The Way, and I almost bought into it. I'd dropped a few bucks for one or two of their indoctrination sessions and was attending meetings. Then said NCO saw me at the PX book store looking at some gaming stuff, 'Star Frontiers' IIRC, and started trying to pressure and embarrass me in public about it. "You could be such a powerful voice for God if you'd just get rid of this stuff and focus on your faith", he said. Well, I'd had to put up with THAT shit once already in my life. If The Way couldn't accept me as I am, then I wanted nothing to do with them. Those two incidents were, I think, the first two major moral and ethical decisions I'd made on my own.

Regarding Jobs and the 'all we have to do is wait for them to die out' argument...

- So, I'm a historian. I'm a Civil War reenactor in a liberal Western US state. I routinely get up in front of high school students and talk about race politics in America. No pressure, right? 🤣😂 But what I'm seeing lately is this tidal wave of revisionism and attempts to rewrite the history of the Western world to fit only ONE narrative... the liberal social democrat narrative... where most of the accomplishments of the West were due to exploitation, colonialism, and slavery. Where white Christian society has contributed nothing but misery to the other peoples of the world. Now let me be clear here: - I AM NOT A RACIST NOR AM I PUSHING A RACIST AGENDA - My comments about the Left wishing to rewrite the historical record to one where all Caucasians share a guilt for any conceivable harm done to anyone of color are a matter of record.

My push-back is threefold: a] you should not judge historical people or events by whatever standard is currently in vogue in your lifetime; they should be judged by their own words and the words of those around them; b] human beings are a conflict-ridden species. The simple basic fact is that humans have always and will always divide the universe into two camps, Us and Them. And whenever it's put up for a vote, almost nobody votes for 'Them'. And c] the Left constantly states that the evils in our societies are endemic and 'have always been there'. This is not true. The Western world has made incredible social progress in just my lifetime. For just one example, I was born the year after Kennedy was shot. I was four when Martin Luther King Jr. was shot. That man was slain for having the unmitigated gall of trying to get Black people to vote, as is their right. But just 10 years ago, America had its first Black President. What's more, he won by a landslide both times he ran for the office. And when his term was over he turned over power, peacefully and in full accordance with the law, to a man who was demonstrably less fit for the office than he was.

 

OK fellas, this is about all the mental gymnastics I can do at 0200. I'll get back to you tomorrow.


I don't think we're anywhere close to general AI. I'm in my 50s so "in my lifetime" doesn't have the range it once had, but I'm not even sure we'll have it in my kids' lifetimes.

The more we investigate it, the more we realise it's an unbelievably complex and difficult problem, and the fact is we don't even have a general, high-level roadmap for even starting to think about building one. We've got zip in terms of concepts for an architecture. All we have is pattern matching, with no understanding of the actual nature of the images and data being classified. While that's an important cognitive tool we use, it seems like it's a long way from being enough to build an actual conscious intelligence.
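
To put something concrete behind "pattern matching", here is about the simplest possible form of it, a nearest-neighbour toy in Python (the feature vectors and labels are invented for illustration). It answers purely by similarity to stored examples; nothing in it models what a "cat" or a "dog" actually is. Modern networks are vastly more capable, but the underlying operation is still matching patterns in data:

import math

# Stored "experience": feature vectors with labels, nothing more.
examples = [([0.9, 0.1], "cat"), ([0.2, 0.8], "dog"), ([0.85, 0.2], "cat")]

def classify(x):
    # Pure pattern matching: return the label of the closest stored vector.
    # No concept of what a cat or a dog is underlies the answer.
    return min(examples, key=lambda ex: math.dist(ex[0], x))[1]

print(classify([0.8, 0.15]))  # "cat", by proximity alone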

I think as we learn more about what general intelligence really is, how it works and what consciousness is, we'll have a much better grasp of the implications and how to deal with them.

Quote

OK fellas, this is about all the mental gymnastics I can do at 0200. I'll get back to you tomorrow.

Have a good night. I'm just about to start my second morning cup of tea.

 

This 

1 hour ago, simonh said:

I don't think we're anywhere close to general AI.

not even close...
Though you never know... but if it happens, it will be more luck than "told you so"...

It's easy to guess: when they say
 

2 hours ago, svensson said:

They claim that they'll solve that 'problem' within 15 years

anything more than 10 years out is usually wishful thinking... and even that!


We can only predict how long it will take when we have a plan for designing and implementing one. In the absence of that, 15 years or any number is just hearsay.

1960s: Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."

1993: Vernor Vinge predicts super-intelligent AIs 'within 30 years'.

2011: Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.

So the distance into the future before we achieve strong AI and hence the singularity is, according to its most optimistic proponents, receding by more than 1 year per year. So I reckon when we get to 2045, strong AI optimists will be predicting it's on the slate to be achieved by about 2090.
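
The rough arithmetic behind that, assuming the 1960s prediction dates to about 1965, works out like this (a quick Python check of the numbers above):

# (year the prediction was made, predicted target year)
targets = [(1965, 1985), (1993, 2023), (2011, 2045)]

for (y0, t0), (y1, t1) in zip(targets, targets[1:]):
    rate = (t1 - t0) / (y1 - y0)  # how fast the target year recedes
    print(f"{y0} -> {y1}: target moved {t1 - t0} yrs in {y1 - y0} yrs = {rate:.2f} yrs/yr")
# 1965 -> 1993: 1.36 yrs/yr; 1993 -> 2011: 1.22 yrs/yr; both above 1.

# Extrapolate: if the target keeps receding at roughly 1.3 yrs/yr from 2011 onward,
# the prediction made in 2045 would point at approximately...
print(round(2045 + (2045 - 2011) * 1.3))  # 2089, i.e. "about 2090"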

EDIT: I don't think there's anything unsolvable about it, I'm firmly in the materialist camp. I just think it's an incredibly hard, complex problem we currently have no conceptual framework for solving.

Edited by simonh

1 hour ago, simonh said:

EDIT: I don't think there's anything unsolvable about it, I'm firmly in the materialist camp. I just think it's an incredibly hard, complex problem we currently have no conceptual framework for solving.

From the (possibly dreaded by some) technoptimist camp... I'd like to add that I read somewhere recently something along the lines that they were about to simulate a fruit fly's full brain neural network completely (as in, biological neuron behavior)... so we're getting there (by copying an existing template...)


On 4/28/2021 at 3:52 PM, Lloyd Dupont said:

From the (possibly dreaded by some) technoptimist camp... I'd like to add that I read somewhere recently something along the lines that they were about to simulate a fruit fly's full brain neural network completely (as in, biological neuron behavior)... so we're getting there (by copying an existing template...)

We need to be trying stuff like that, but we don't actually know for sure how biological neurons work. We have various ideas, but cells are incredibly complex systems and almost everything about them is in some kind of feedback loop with everything else. So even simulating single cells is a challenge as we don't understand all the mechanisms. Having a solid crack at this sort of stuff is one way to try and figure it out though.
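
For a sense of what "simulating a biological neuron" even means in practice, the sketch below is a leaky integrate-and-fire model in Python, roughly the simplest abstraction in common use (parameter values are just illustrative). A serious fruit fly simulation would need far richer models - ion channels, neuromodulators, plasticity - which is exactly the unknown-mechanisms problem described above:

# Leaky integrate-and-fire: a drastic simplification of a real neuron.
dt = 0.1         # time step (ms)
tau = 10.0       # membrane time constant (ms)
v_rest = -65.0   # resting potential (mV)
v_thresh = -50.0 # firing threshold (mV)
v_reset = -70.0  # post-spike reset (mV)

def simulate(input_current, steps=1000):
    v, spikes = v_rest, []
    for step in range(steps):
        # Potential leaks back toward rest while integrating the input current.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:            # threshold crossing = spike
            spikes.append(step * dt)
            v = v_reset              # reset after firing
    return spikes

print(len(simulate(input_current=20.0)))  # spike count over 100 ms of constant drive

Real neurons do all of this through thousands of coupled chemical feedback loops, which is why even single-cell simulation remains an open problem.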

Edited by simonh
