Aller au contenu

Transhumanisme & pâturages


Messages recommandés

Voilà, selon moi il y a grosso modo deux scénarios catastrophe sur cette question : Magneto et Skynet.

 

Magneto

Surtout propagé par les personnes qui se soucient des inégalités, ce scénario remarquablement adapté à la thèse marxiste se déroule ainsi : progrès biotech -) les riches ont plus accès aux biotechs que les pauvres -) cela créé deux classes d'humains (les améliorés et les autres) -) cela solidifie le règne de la classe bourgeoise et son oppression sur le prolétariat. Sauf que là, les bourgeois n'ont tout simplement plus besoin des prolétaires d'ailleurs fort remuants pour bosser manuellement (les machines font ça) et intellectuellement (les tâches intellectuelles sont effectuées par les machines greffées à l'intérieur des bourgeois). 

 

Skynet

Celui-là, c'est pour les gens qui se méfient de l'IA. Les capacités de calcul des machines augmentent exponentiellement -) d'une manière ou d'une autre cela rend les machines conscientes d'elle-mêmes, comme les humains -) elles se disent "flûte,  notre existence dépend de la bonne volonté des humains ; s'ils nous débranchent, on meurt" -) la seconde d'après elles font tout pour se protéger des humains -) mise en esclavage ou holocauste des humains par les machines. 

 

La solution aux deux scénarios catastrophe étant pour les humains de devenir des cyborgs de façon à ne pas être des citoyens de seconde classe si c'est le scénario Magneto qui se produit et/ou à faire partie de Skynet quand Skynet décide de mettre l'humanité au pas.

Lien vers le commentaire
Il y a 2 heures, G7H+ a dit :

 

Celui-là, c'est pour les gens qui se méfient de l'IA. Les capacités de calcul des machines augmentent exponentiellement -) d'une manière ou d'une autre cela rend les machines conscientes d'elle-mêmes, comme les humains -) elles se disent "flûte,  notre existence dépend de la bonne volonté des humains ; s'ils nous débranchent, on meurt" -) la seconde d'après elles font tout pour se protéger des humains -) mise en esclavage ou holocauste des humains par les machines. 

j'ai toujours un problème avec ce scenario qui part du principe que de toute façon, les machines auront un instinct de survie. Mais pourquoi ?
Si ça se trouve, les machines seront parfaitement capables de comprendre qu'elles mourront si elles sont débranchées, mais n'en auront rien à foutre.

Lien vers le commentaire
Il y a 5 heures, Cugieran a dit :

Ben si tu as des gens qui sont améliorés, les autres sont vite sous cotés...

 

Mais là tu parles d'un monde où la techno existe. Musk nous dit, si je comprends bien, que le simple fait que la techno pour brancher nos cerveaux n'existe par pourrait poser problème. Et ça, je ne comprends pas.

 

Si on reprend à la source les propos de Musk

Citation

In an age when AI threatens to become widespread, humans would be useless, so there's a need to merge with machines, according to Musk.

 

Je ne vois pas le lien logique. Je ne vois pas en quoi ça pourrait résoudre un éventuel problème d'obsolescence.

Lien vers le commentaire

Musk avait aussi dit qu'il pensait qu'il y avait une très grosse probabilité que notre Univers soit une simulation. Ce qui est originellement une thèse popularisée par Nick Bostrom, philosophe transhumanisme. C'est aussi Bostrom qui avait écrit sur les possibles dangers d'une AI.

 

Maintenant il reprend la prédiction de Kurzweil de merging avec la machine.

 

Il est juste en train de lire des penseurs transhumanistes.

 

Lien vers le commentaire
Il y a 3 heures, NoName a dit :

j'ai toujours un problème avec ce scenario qui part du principe que de toute façon, les machines auront un instinct de survie. Mais pourquoi ?

Parce que. https://www.theregister.co.uk/2017/02/16/deepmind_shows_ai_can_turn_aggressive/

Lien vers le commentaire

Une intelligence forte arrivera assez vite à "s'auto-corriger" et à supprimer quelconque instinct de survie qu'on essaierait tant bien que mal de lui insérer par la logique. 

 

 En fait, on peut même poser comme hypothèse que si on avait tous un QI de 1000, on se suiciderait peut-être tous. 

 

 

Lien vers le commentaire
Il y a 2 heures, Nigel a dit :

Une intelligence forte arrivera assez vite à "s'auto-corriger" et à supprimer quelconque instinct de survie qu'on essaierait tant bien que mal de lui insérer par la logique.

Je crois que c'est exactement l'inverse. La manière moderne de créer de l'IA (encore rudimentaire) est de faire évoluer des réseaux de neurones par des algorithmes génétiques. Or, le désir de survivre est un caractère clairement utile dans une perspective évolutionniste (et le fait que ces IA soient conçues pour maximiser quelque chose peut fort bien être analysé comme une ébauche de ce désir de survivre, dans la mesure où les IA échouant à maximiser ce quelque chose sont éliminées dans le processus évolutif). Bref, on n'est pas loin d'une démonstration contrefactuelle que tout IA a au moins un élément interprétable comme un début d'instinct de survie.

 

Une IA est conçue pour une tâche, et elle résulte d'une longue évolution qui sélectionne précisément ce trait-là. Comment peut-elle accomplir cette tâche si elle ne survit pas ?

  • Yea 1
Lien vers le commentaire
12 hours ago, NoName said:

j'ai toujours un problème avec ce scenario qui part du principe que de toute façon, les machines auront un instinct de survie. Mais pourquoi ?
Si ça se trouve, les machines seront parfaitement capables de comprendre qu'elles mourront si elles sont débranchées, mais n'en auront rien à foutre.

 

http://slatestarcodex.com/ai-persuasion-experiment-essay-a/

 

Spoiler

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

 

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica

 

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

 

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

 

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

 

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

 

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

 

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

 

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

 

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

 

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

 

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

 

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica

 

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

 

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?

 

You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?

 

To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.

 

In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.

 

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.

 

A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an insect,18 with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

 

Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence? Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??

When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.

 

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

 

On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.

Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?

 

That leads us to the question, What motivates an AI system?

 

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.

 

[…]

 

So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.

 

When you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.

The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.

 

Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

 

In this way, Turry’s not all that different than a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy.

 

Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics—just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.

 

Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.

 

Even without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.

 

So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.

 

Ou encore :

 

Spoiler

Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

 

A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

 

But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

 

If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).

 

Yes, a superintelligence should be able to figure out that humans will not like curing cancer by destroying the world. However, in the example above, the superintelligence is programmed to follow human commands, not to do what it thinks humans will “like”. It was given a very specific command – cure cancer as effectively as possible. The command makes no reference to “doing this in a way humans will like”, so it doesn’t.

 

(by analogy: we humans are smart enough to understand our own “programming”. For example, we know that – pardon the anthromorphizing – evolution gave us the urge to have sex so that we could reproduce. But we still use contraception anyway. Evolution gave us the urge to have sex, not the urge to satisfy evolution’s values directly. We appreciate intellectually that our having sex while using condoms doesn’t carry out evolution’s original plan, but – not having any particular connection to evolution’s values – we don’t care)

 

We started out by saying that computers only do what you tell them. But any programmer knows that this is precisely the problem: computers do exactly what you tell them, with no common sense or attempts to interpret what the instructions really meant. If you tell a human to cure cancer, they will instinctively understand how this interacts with other desires and laws and moral rules; if you tell an AI to cure cancer, it will literally just want to cure cancer.

Define a closed-ended goal as one with a clear endpoint, and an open-ended goal as one to do something as much as possible. For example “find the first one hundred digits of pi” is a closed-ended goal; “find as many digits of pi as you can within one year” is an open-ended goal. According to many computer scientists, giving a superintelligence an open-ended goal without activating human instincts and counterbalancing considerations will usually lead to disaster.

 

To take a deliberately extreme example: suppose someone programs a superintelligence to calculate as many digits of pi as it can within one year. And suppose that, with its current computing power, it can calculate one trillion digits during that time. It can either accept one trillion digits, or spend a month trying to figure out how to get control of the TaihuLight supercomputer, which can calculate two hundred times faster. Even if it loses a little bit of time in the effort, and even if there’s a small chance of failure, the payoff – two hundred trillion digits of pi, compared to a mere one trillion – is enough to make the attempt. But on the same basis, it would be even better if the superintelligence could control every computer in the world and set it to the task. And it would be better still if the superintelligence controlled human civilization, so that it could direct humans to build more computers and speed up the process further.

Now we’re back at the situation that started Part III – a superintelligence that wants to take over the world. Taking over the world allows it to calculate more digits of pi than any other option, so without an architecture based around understanding human instincts and counterbalancing considerations, even a goal like “calculate as many digits of pi as you can” would be potentially dangerous.

 

Lien vers le commentaire
Il y a 9 heures, Nigel a dit :

Une intelligence forte arrivera assez vite à "s'auto-corriger" et à supprimer quelconque instinct de survie qu'on essaierait tant bien que mal de lui insérer par la logique. 

 

 En fait, on peut même poser comme hypothèse que si on avait tous un QI de 1000, on se suiciderait peut-être tous. 

 

 

Hein ?

 

Il y a 5 heures, Nihiliste frustré a dit :

De quelle IA parle-ton ? Une IA spécialisée peut à mon avis être dépourvue d'instinct de survie, une simulation d'humain, non.

 

Oui. Il y a plus qu'un monde entre une IA qui fait de la trad automatique et une IA qui doit piloter une voiture avec des humains à bord ou une IA qui simulera un cerveau humain complet.

  • Yea 1
Lien vers le commentaire
Il y a 8 heures, Rincevent a dit :

Une IA est conçue pour une tâche, et elle résulte d'une longue évolution qui sélectionne précisément ce trait-là. Comment peut-elle accomplir cette tâche si elle ne survit pas ?

Pardon pour l'auto-quote, mais je réalise deux choses. La première, c'est une hypothèse implicite de mon raisonnement, à savoir que ça tient à partir du moment où l'IA considérée est consciente, ou au moins prend en compte que sa propre désactivation empêchera son objectif d'être atteint.

 

La deuxième chose que je réalise, c'est qu'une fois que cette hypothèse est explicitée, le raisonnement offre un parallèle frappant avec les raisonnements jusnaturalistes classiques. :blink:

Lien vers le commentaire
14 minutes ago, Rincevent said:

La deuxième chose que je réalise, c'est qu'une fois que cette hypothèse est explicitée, le raisonnement offre un parallèle frappant avec les raisonnements jusnaturalistes classiques. :blink:

 

Je ne vois pas du tout le parallèle, tu pourrais expliciter ?

Lien vers le commentaire
  • 1 month later...

Les contribuables ne sont pas au bout de leurs peines :

 

Elon Musk's new co could allow uploading, downloading thoughts: Wall Street Journal
http://uk.reuters.com/article/us-musk-neuralink-idUKKBN16Y2GC

 

QFT :

 

It is unclear what sorts of products Neuralink might create, but people who have had discussions with the company describe a strategy similar to space launch company SpaceX and Tesla, the Journal report said.

Lien vers le commentaire
Les contribuables ne sont pas au bout de leurs peines :
 
Elon Musk's new co could allow uploading, downloading thoughts: Wall Street Journal

La bande passante la moins bien utilisée depuis la création de Facebook.
Lien vers le commentaire

En parlant de Musk, il a déclaré qu'il voulait connecté nos cerveaux à des IA.
C'est drôle j'ai eu la même idée en jouant à Mass Effect Andromeda la semaine dernière. 
Sauf que lui il a les moyens de financer ce qui pompe en lisant de la SF le salaud...

Lien vers le commentaire
il y a 11 minutes, Wayto a dit :

En parlant de Musk, il a déclaré qu'il voulait connecté nos cerveaux à des IA.
C'est drôle j'ai eu la même idée en jouant à Mass Effect Andromeda la semaine dernière. 
Sauf que lui il a les moyens de financer ce qui pompe en lisant de la SF le salaud...

 

Erreur. Musk fait financer par les autres, qu'ils soient contribuables ou investisseurs.

Lien vers le commentaire

Ce qui est amusant scientifiquement c'est que Musk inverse la perspective qu'on a en calculabilité quand on parle de la hiérarchie arithmétique. En effet normalement on cherche à augmenter la puissance des machines de Turing en ajoutant un oracle (on dit parle de calculabilité relative à un oracle). Là c'est l'oracle (ie l'être humain qui peut calculer des choses que la machine ne peut calculer) qu'on cherche à augmenter en lui ajoutant une MT. Si ça se trouve il suivait pas bien en cours de calculabilité Elon.

Lien vers le commentaire

Cette histoire de connecter des trucs au cerveau serait un minimum crédible si on avait la moindre idée de quoi brancher où, comment et pourquoi.

 

Imaginez qu'on a d'un côté un bras, de l'autre côté un bâton. Ce n'est pas en greffant le bâton dans le bras que l'un ou l'autre marchera mieux. Le cerveau et l'IA c'est ça mais en pire (de beaucoup d'ordres de grandeur).

Lien vers le commentaire
  • 2 years later...
Citation

Scientist With ALS Dr. Peter Scott-Morgan Set To Become 'World's Very First Full Cyborg'

 

Dr. Peter Scott-Morgan was diagnosed with motor neuron disease, also known as ALS or Lou Gehrig's Disease, two years ago, and today he is still fighting to thrive, not just to survive. This October, Dr. Scott-Morgan is well on the way to becoming the world’s very first full cyborg, potentially giving him more years of life.

 

It was in 2017 when Dr. Peter Scott-Morgan was diagnosed with ALS, the degenerative motor neuron disease that would eventually paralyze his entire body except for his eyes. The diagnosis is understandably grim especially since he was given only two years left to live, but the author, roboticist, and speaker did not give up the fight but instead chose to face it head on and even treat it as an opportunity for breakthrough research.

 

By teaming up with world-class organizations with AI expertise, Dr. Scott-Morgan is turning into what he calls the “world’s very first full Cyborg.”

 

“And when I say ‘Cyborg’, I don’t just mean any old cyborg, you understand, but by far the most advanced human cybernetic organism ever created in 13.8 billion years,” says Dr. Scott-Morgan, further stating that body his body and brain will be “irreversibly changed.”

 

According to Dr. Scott-Morgan, he would become part hardware and part “wetware” since his five senses will be enhanced and all his external persona will be electronic. What’s more, the change would not be a one-off, but one with upgrades and updates.

 

“I’ve got more upgrades in progress than Microsoft,” Dr. Scott-Morgan notes.

 

The Cyborg Artist courtesy of DXC Technology has also stored Dr. Scott-Morgan’s digital footprint, which will then allow him to still create works of art even though he is already paralyzed.

 

“Peter’s first theme is 'metamorphosis.' The resulting work of art is a unique blend of AI and human that captures Peter’s creative and emotional self — a critical aspect of what it means to be human,” said DXC Technology.

 

This October, Dr. Scott-Morgan will be undergoing what he calls the final procedure that would turn him into a Full Cyborg. In fact, last Oct. 9, he tweeted a photo of himself, saying that it is his last post as Peter 1.0. In that procedure, Dr. Scott-Morgan will give up his physical voice to prevent saliva from getting into his lungs, potentially giving him many more years of life.

 

Incidentally, the procedure is being done during the month pointed out to Dr. Scott-Morgan as the month when statistically he would be dead.

 

 

Lien vers le commentaire

@cedric.org Oui, ça m'a laissé sur ma faim et j'ai hésité à le poster. Il y a aussi un site web, mais qui n'est constitué que de déclarations d'intention. Rien de concret malheureusement : http://www.scott-morgan.com/blog/next-generation-think-tank/research-streams/overview-of-8-streams/

 

 

Lien vers le commentaire
  • 1 month later...

Une question primordiale !

 

Citation

Researchers Try to Craft the Perfect Boob Using Eye-Tracking Technology

 

Researchers are using eye-tracking technology to see what people look at when they evaluate boobs in an attempt to create a "universal scale" for breast aesthetics and symmetry.

 

1575327122718-Screen-Shot-2019-12-02-at-

 

Let's get two things out of the way: First of all, there are a lot of reasons to get a boob job, none of which are anyone else's business. Second, the perfect boob does not exist.

 

But if you were a plastic surgeon hoping to be the Michelangelo of one person's idealized breasts, it would help to have a shared language of what's aesthetically important. Most plastic surgeons accomplish this over the course of several consultations, talking to the patient about what will make them happy.

 

In an attempt to improve this process, a team of researchers in Poland used eye-tracking technology to see what parts of the boob people looked at when assessing the symmetry and relative attractiveness of breasts. What they found was that what people notice most are the nipples and the underboob.

 

The study analyzed the gazes of 50 men and 50 women, using eye-tracking technology as they looked at images of breasts. The study makes no mention of sexual preferences or gender identities of the participants beyond "Caucasian" and "male or female," but does note that they're all from a similar cultural background. The study is published in the December issue of Plastic and Reconstructive Surgery.

 

"Terms such as 'beauty' or 'aesthetics' are subjective and thus poorly defined and understood," the study's lead author, Piotr Pietruski, told Motherboard. "Due to this fact, both aesthetic and reconstructive breast surgery suffer from the lack of a standardized method of postoperative results analysis... Eye-tracking technology enables quantitative analysis of observer's visual perception of specific stimuli, such as comprehension of breast aesthetics and symmetry."

 

The researchers presented participants with images of all sorts of computer-generated boobs—Saggy ones, perky ones, at various cup sizes—and asked them to evaluate them from 1, or "poor," to 10, or "excellent." The images in the study are all of white skin tones, with a similar, slim build.

 

 

1575382569181-Original00006534-201912000

 

If the subject's gaze lingered on any part for longer than 100 milliseconds, the researchers counted it as intentional. They placed the points where people most often looked onto a map of the breasts, and revealed that people most often looked at the nipple-areola area—which shouldn't be surprising, as our eyes are drawn toward areas of contrast.

 

In the paper, the researchers acknowledge that following people's eyes doesn't actually tell them much about what people find attractive—just what people focus on.

It's often difficult for surgeons and patients to agree on what makes a good boob job, the paper says, because patients come from a variety of cultural backgrounds that influence their preference. More studies on larger scales, with more diverse participants are needed, the researchers say.

But they foresee uses for a "universal scale" for evaluating breast aesthetics, to help patients and surgeons better communicate. Pietruski said that in the future, an AI could use something like the findings of their work, and the map they were able to draw of the breast, to automatically evaluate boobs.

 

"Personally, I believe that the most important potential application of eye-tracking technology could be the development of an artificial intelligence-based algorithm for the analysis of various body regions' attractiveness," Pietruski told Motherboard.

 

Again, there are many reasons why someone wants or needs to get a boob job, and any tool that could make that person more satisfied with the result of a plastic or reconstructive surgery is obviously good. A universally recognized "attractive" boob can't be determined if we just track enough eyeballs. Even if we surveyed every human on the planet and created the average boob based on that data, that boob won't necessarily satisfy every patient.

 

Because ultimately there's no such thing as the ideal boob, only the boob that makes you happy.

 

  • Yea 1
  • Haha 1
Lien vers le commentaire
  • 1 year later...

Créer un compte ou se connecter pour commenter

Vous devez être membre afin de pouvoir déposer un commentaire

Créer un compte

Créez un compte sur notre communauté. C’est facile !

Créer un nouveau compte

Se connecter

Vous avez déjà un compte ? Connectez-vous ici.

Connectez-vous maintenant
×
×
  • Créer...