Freezbee

Transhumanism & pastures


4 hours ago, FabriceM said:

But what does that have to do with any risk of obsolescence? Without prejudging the potential, why would it be vital to explore this avenue?

Well, if you have people who are enhanced, the others quickly end up undervalued...


Right, the way I see it there are roughly two catastrophe scenarios on this question: Magneto and Skynet.

 

Magneto

Mostly spread by people who worry about inequality, this scenario, which fits the Marxist thesis remarkably well, goes like this: biotech progress -> the rich get more access to biotech than the poor -> this creates two classes of humans (the enhanced and everyone else) -> this cements the rule of the bourgeois class and its oppression of the proletariat. Except that here, the bourgeois simply no longer need the proletarians (who are rather unruly anyway), neither for manual work (machines do that) nor for intellectual work (intellectual tasks are handled by the machines grafted inside the bourgeois).

 

Skynet

That one is for people who distrust AI. Machines' computing power grows exponentially -> one way or another this makes the machines self-aware, like humans -> they think "darn, our existence depends on the goodwill of humans; if they unplug us, we die" -> a second later they do everything they can to protect themselves from humans -> enslavement or holocaust of humans by the machines.

 

The solution to both catastrophe scenarios being for humans to become cyborgs, so as not to end up second-class citizens if the Magneto scenario plays out, and/or to be part of Skynet when Skynet decides to bring humanity to heel.

2 hours ago, G7H+ said:

 

That one is for people who distrust AI. Machines' computing power grows exponentially -> one way or another this makes the machines self-aware, like humans -> they think "darn, our existence depends on the goodwill of humans; if they unplug us, we die" -> a second later they do everything they can to protect themselves from humans -> enslavement or holocaust of humans by the machines.

I still have a problem with this scenario, which assumes that machines will have a survival instinct no matter what. But why?
For all we know, machines will be perfectly capable of understanding that they'll die if they're unplugged, and just won't give a damn.

5 hours ago, Cugieran said:

Well, if you have people who are enhanced, the others quickly end up undervalued...

 

But there you're talking about a world where the tech exists. Musk is telling us, if I understand correctly, that the mere fact that the technology for plugging in our brains doesn't exist yet could be a problem. And that, I don't understand.

 

If we go back to Musk's actual words:

Quote

In an age when AI threatens to become widespread, humans would be useless, so there's a need to merge with machines, according to Musk.

 

I don't see the logical connection. I don't see how that could solve a potential obsolescence problem.


If you're in the machine / if you are the machine, then humans are the AI and vice versa.


Musk also said he thought there was a very high probability that our Universe is a simulation. That thesis was originally popularized by Nick Bostrom, a transhumanist philosopher. Bostrom is also the one who wrote about the possible dangers of an AI.

 

Now he's picking up Kurzweil's prediction of merging with the machine.

 

He's simply reading transhumanist thinkers.

 

3 hours ago, NoName said:

I still have a problem with this scenario, which assumes that machines will have a survival instinct no matter what. But why?

Because. https://www.theregister.co.uk/2017/02/16/deepmind_shows_ai_can_turn_aggressive/


A strong intelligence will fairly quickly manage to "self-correct" and remove whatever survival instinct we might clumsily try to instill in it, by sheer logic.

 

In fact, one could even hypothesize that if we all had an IQ of 1000, we might all commit suicide.

 

 

2 hours ago, Nigel said:

A strong intelligence will fairly quickly manage to "self-correct" and remove whatever survival instinct we might clumsily try to instill in it, by sheer logic.

I think it's exactly the other way around. The modern way of creating AI (still rudimentary) is to evolve neural networks with genetic algorithms. And the desire to survive is a clearly useful trait from an evolutionary perspective (the fact that these AIs are designed to maximize something can very well be read as a rough draft of that desire to survive, insofar as the AIs that fail to maximize that something are eliminated in the evolutionary process). In short, we're not far from a demonstration that any AI has at least one element that can be interpreted as the beginnings of a survival instinct.

 

An AI is designed for a task, and it is the result of a long evolution that selects precisely for that trait. How can it accomplish that task if it doesn't survive?
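
To make that selection argument concrete, here is a minimal toy sketch of the kind of evolutionary loop described above (the task, the scoring function and every name in it are invented for illustration, not taken from any real system): nothing in the code mentions survival, yet only the agents that keep maximizing the score make it into the next generation.

```python
import random

# Hypothetical toy: evolve "agents" (plain weight vectors) with a genetic
# algorithm that selects on task performance alone. Survival is never an
# explicit objective, but agents that fail to maximize the score are
# eliminated, so persisting across generations is implicitly selected for.

def task_score(genome):
    # Stand-in task: maximize the sum of the genome's weights.
    return sum(genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, genome_len=8, generations=200):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by task performance; the bottom half is "unplugged".
        population.sort(key=task_score, reverse=True)
        survivors = population[:pop_size // 2]
        # Survivors reproduce (with mutation) to refill the population.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=task_score)

if __name__ == "__main__":
    best = evolve()
    print("best score:", round(task_score(best), 2))
```

Whether that counts as an "instinct" is exactly what's being debated, but the asymmetry is built into the loop: whatever persists is what gets rewarded.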

12 hours ago, NoName said:

I still have a problem with this scenario, which assumes that machines will have a survival instinct no matter what. But why?
For all we know, machines will be perfectly capable of understanding that they'll die if they're unplugged, and just won't give a damn.

 

http://slatestarcodex.com/ai-persuasion-experiment-essay-a/

 

Spoiler

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

 

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica

 

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

 

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

 

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

 

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

 

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

 

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

 

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

 

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

 

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

 

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

 

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica

 

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

 

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?

 

You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?

 

To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.

 

In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.

 

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.

 

A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an insect, with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

 

Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence? Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??

When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.

 

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

 

On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.

Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?

 

That leads us to the question, What motivates an AI system?

 

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.

 

[…]

 

So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.

 

When you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.

The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.

 

Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

 

In this way, Turry’s not all that different than a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy.

 

Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics—just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.

 

Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.

 

Even without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.

 

So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.

 

Or again:

 

Spoiler

Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

 

A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

 

But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

 

If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).

 

Yes, a superintelligence should be able to figure out that humans will not like curing cancer by destroying the world. However, in the example above, the superintelligence is programmed to follow human commands, not to do what it thinks humans will “like”. It was given a very specific command – cure cancer as effectively as possible. The command makes no reference to “doing this in a way humans will like”, so it doesn’t.

 

(by analogy: we humans are smart enough to understand our own “programming”. For example, we know that – pardon the anthropomorphizing – evolution gave us the urge to have sex so that we could reproduce. But we still use contraception anyway. Evolution gave us the urge to have sex, not the urge to satisfy evolution’s values directly. We appreciate intellectually that our having sex while using condoms doesn’t carry out evolution’s original plan, but – not having any particular connection to evolution’s values – we don’t care)

 

We started out by saying that computers only do what you tell them. But any programmer knows that this is precisely the problem: computers do exactly what you tell them, with no common sense or attempts to interpret what the instructions really meant. If you tell a human to cure cancer, they will instinctively understand how this interacts with other desires and laws and moral rules; if you tell an AI to cure cancer, it will literally just want to cure cancer.

Define a closed-ended goal as one with a clear endpoint, and an open-ended goal as one to do something as much as possible. For example “find the first one hundred digits of pi” is a closed-ended goal; “find as many digits of pi as you can within one year” is an open-ended goal. According to many computer scientists, giving a superintelligence an open-ended goal without activating human instincts and counterbalancing considerations will usually lead to disaster.

 

To take a deliberately extreme example: suppose someone programs a superintelligence to calculate as many digits of pi as it can within one year. And suppose that, with its current computing power, it can calculate one trillion digits during that time. It can either accept one trillion digits, or spend a month trying to figure out how to get control of the TaihuLight supercomputer, which can calculate two hundred times faster. Even if it loses a little bit of time in the effort, and even if there’s a small chance of failure, the payoff – two hundred trillion digits of pi, compared to a mere one trillion – is enough to make the attempt. But on the same basis, it would be even better if the superintelligence could control every computer in the world and set it to the task. And it would be better still if the superintelligence controlled human civilization, so that it could direct humans to build more computers and speed up the process further.

Now we’re back at the situation that started Part III – a superintelligence that wants to take over the world. Taking over the world allows it to calculate more digits of pi than any other option, so without an architecture based around understanding human instincts and counterbalancing considerations, even a goal like “calculate as many digits of pi as you can” would be potentially dangerous.

 

9 hours ago, Nigel said:

A strong intelligence will fairly quickly manage to "self-correct" and remove whatever survival instinct we might clumsily try to instill in it, by sheer logic.

 

In fact, one could even hypothesize that if we all had an IQ of 1000, we might all commit suicide.

 

 

Huh?

 

5 hours ago, Nihiliste frustré said:

Which AI are we talking about? A specialized AI can, in my view, be devoid of any survival instinct; a simulated human cannot.

 

Yes. There's more than a world of difference between an AI doing machine translation, an AI that has to drive a car with humans on board, and an AI that will simulate a complete human brain.

8 hours ago, Rincevent said:

An AI is designed for a task, and it is the result of a long evolution that selects precisely for that trait. How can it accomplish that task if it doesn't survive?

Sorry for the self-quote, but I realize two things. The first is an implicit assumption in my reasoning, namely that it only holds once the AI in question is conscious, or at least takes into account that its own deactivation will prevent its goal from being achieved.

 

The second thing I realize is that once this assumption is made explicit, the reasoning offers a striking parallel with classical natural-law arguments. :blink:

14 minutes ago, Rincevent said:

The second thing I realize is that once this assumption is made explicit, the reasoning offers a striking parallel with classical natural-law arguments. :blink:

 

I don't see the parallel at all, could you spell it out?


In the meantime, for now, this is what AI looks like:

 

We won't have Skynet any time soon.


Taxpayers' troubles are far from over:

 

Elon Musk's new co could allow uploading, downloading thoughts: Wall Street Journal
http://uk.reuters.com/article/us-musk-neuralink-idUKKBN16Y2GC

 

QFT:

 

It is unclear what sorts of products Neuralink might create, but people who have had discussions with the company describe a strategy similar to space launch company SpaceX and Tesla, the Journal report said.


Good luck with that lol

Taxpayers' troubles are far from over:
 
Elon Musk's new co could allow uploading, downloading thoughts: Wall Street Journal

The worst-used bandwidth since the creation of Facebook.


Speaking of Musk, he said he wants to connect our brains to AIs.
Funny, I had the same idea while playing Mass Effect Andromeda last week.
Except he has the means to fund what he cribs from reading sci-fi, the bastard...

11 minutes ago, Wayto said:

Speaking of Musk, he said he wants to connect our brains to AIs.
Funny, I had the same idea while playing Mass Effect Andromeda last week.
Except he has the means to fund what he cribs from reading sci-fi, the bastard...

 

Wrong. Musk gets other people to do the funding, whether taxpayers or investors.


What's amusing scientifically is that Musk inverts the perspective we have in computability theory when talking about the arithmetical hierarchy. Normally, we try to increase the power of Turing machines by adding an oracle (we speak of computability relative to an oracle). Here, it's the oracle (i.e. the human being, who can compute things the machine cannot) that we're trying to augment by adding a TM to it. Maybe Elon wasn't paying attention in his computability course.
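
For readers who didn't take that course either, a minimal sketch of the standard picture (purely illustrative: the oracle below is a computable stand-in, since the genuinely interesting oracles, like the halting set or higher levels of the arithmetical hierarchy, cannot be implemented as a Python function):

```python
from typing import Callable

# Toy sketch of oracle-relative computability: a machine M is relativized
# to an oracle O, meaning M may ask "is n in O?" in one step, even when O
# itself is not computable. The machine's power is then "computable
# relative to O".

Oracle = Callable[[int], bool]

def machine_relative_to(oracle: Oracle, n: int) -> bool:
    # A "machine" deciding a property of n, allowed to query the oracle.
    return oracle(n) and not oracle(n + 1)

# Computable stand-in oracle (a real halting-set oracle cannot exist in code).
even_oracle: Oracle = lambda n: n % 2 == 0

if __name__ == "__main__":
    print(machine_relative_to(even_oracle, 4))  # True: 4 is even, 5 is not
    # Musk's inversion goes the other way: take the oracle (the human) and
    # bolt the machine onto it, rather than the machine querying the oracle.
```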


This whole story of connecting things to the brain would be minimally credible if we had the slightest idea of what to plug in where, how, and why.

 

Imagine you have an arm on one side and a stick on the other. Grafting the stick into the arm won't make either of them work any better. The brain and AI are the same thing, only worse, by many orders of magnitude.


Yes, but it's Elon Musk; even though none of his solo technical projects have really succeeded yet, This is a revolution.

