OpenAI has now responded to the accusations leveled against it by Elon Musk, the former co-founder turned rival who filed a lawsuit claiming the organization has strayed from its original principles and its nonprofit nature. In an official blog post titled "OpenAI and Elon Musk," signed by prominent figures including president Greg Brockman, chief scientist Ilya Sutskever, co-founders John Schulman and Wojciech Zaremba, and CEO Sam Altman, OpenAI declared its intention to firmly contest Musk's claims.

OpenAI's defense rests chiefly on a series of redacted emails indicating that Musk was not opposed to turning OpenAI into a for-profit entity in order to fund its mission of developing artificial general intelligence (AGI), a form of AI that surpasses human intelligence, as Altman defines it. Although Musk voiced doubts about the company's ability to raise the funds needed to compete with the likes of DeepMind and Google's other divisions, he appears to have been willing to consider making OpenAI profitable, even suggesting folding it into Tesla to finance its research.

OpenAI's co-founders, however, were not unanimous about this proposal. Even so, the leaked communications show that Musk favored a for-profit business model for OpenAI, one that would have given him greater control than the other founders.

Ilya Sutskever, moreover, offered a different reading of the word "open" in the organization's name. Musk's lawsuit, filed on Thursday, February 29 (a leap day), accuses OpenAI, Altman, and Brockman of breaching the entity's founding agreement, arguing that keeping GPT-4's design confidential and private, available only to OpenAI and presumably to Microsoft, violates that agreement. So far no official document attesting to the founding agreement has been produced, apart from some evidence in the form of emails.

OpenAI did publish a "Charter" on its blog in April 2018, which commits to ensuring "broadly distributed benefits" from AI and AGI while avoiding any express mention of terms such as "open source" or "nonprofit."

This raises the question of whether emails can stand as contracts, and whether they will be enough to defeat Musk's claims in court. The answer will likely arrive in the near future.

In any case, here are the original emails:

[1]
From: Elon Musk <>
To: Greg Brockman <>
CC: Sam Altman <>
Date: Sun, Nov 22, 2015 at 7:48 PM
Subject: follow up from call
Blog sounds good, assuming adjustments for neutrality vs being YC-centric.

I’d favor positioning the blog to appeal a bit more to the general public — there is a lot of value to having the public root for us to succeed — and then having a longer, more detailed and inside-baseball version for recruiting, with a link to it at the end of the general public version.

We need to go with a much bigger number than $100M to avoid sounding hopeless relative to what Google or Facebook are spending. I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn’t provide.

Template seems fine, apart from shifting to a vesting cash bonus as default, which can optionally be turned into YC or potentially SpaceX (need to understand how much this will be) stock.

[2]
From: Elon Musk <>
To: Ilya Sutskever <>, Greg Brockman <>
Date: Thu, Feb 1, 2018 at 3:52 AM
Subject: Fwd: Top AI institutions today
is exactly right. We may wish it otherwise, but, in my and ’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn’t zero.
Begin forwarded message:
From: <>
To: Elon Musk <>
Date: January 31, 2018 at 11:54:30 PM PST
Subject: Re: Top AI institutions today
Working at the cutting edge of AI is unfortunately expensive. For example <>

In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).

I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems – compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in the open, you might in fact be making things worse and helping them out "for free", because any advances are fairly easy for them to copy and immediately incorporate, at scale.

A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could "catch up" to Google scale, and the investors might exert too much pressure in the wrong directions.

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the "first stage" of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The "second stage" would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.

[3]
From: Elon Musk <>
To: Ilya Sutskever <>, Greg Brockman <>
CC: Sam Altman <>, <>
Date: Wed, Dec 26, 2018 at 12:07 PM
Subject: I feel I should reiterate
My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.

Even raising several hundred million won’t be enough. This needs billions per year immediately or forget it.

Unfortunately, humanity’s future is in the hands of <>

And they are doing a lot more than this.

I really hope I’m wrong.

Elon

[4]
From: Elon Musk <>
To: Sam Altman <>, Ilya Sutskever <>, Greg Brockman <>
Date: Sat, Jan 2, 2016 at 8:18 AM
Subject: Fwd: congrats on the falcon 9
Begin forwarded message:
From: <>
To: Elon Musk <>
Date: January 2, 2016 at 10:12:32 AM CST
Subject: congrats on the falcon 9
Hi Elon

Happy new year to you, !

Congratulations on landing the Falcon 9, what an amazing achievement. Time to build out the fleet now!

I’ve seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem? There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, that I’m sure you’ve seen, but there are also other important considerations:
http://slatestarcodex.com/2015/12/17/should-ai-be-open/

I’d be interested to hear your counter-arguments to these points.

Best
From: Ilya Sutskever <>
To: Elon Musk <>, Sam Altman <>, Greg Brockman <>
Date: Sat, Jan 2, 2016 at 9:06 AM
Subject: Fwd: congrats on the falcon 9
The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by opensourcing everything, we make it easy for someone unscrupulous with access to an overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

From: Elon Musk <>
To: Ilya Sutskever <>
Date: Sat, Jan 2, 2016 at 9:11 AM
Subject: Fwd: congrats on the falcon 9

Here are the five most significant points to emerge from the emails:

  • Ilya Sutskever, OpenAI's chief scientist, made clear that the "Open" in "OpenAI" does not necessarily mean "open source." He explained that while the goal is for everyone to benefit from AI once it is built, there is no obligation to share all the science, even if doing so may be a good strategy in the short term.
  • Elon Musk agreed that artificial intelligence need not be open source. Replying to Sutskever, who suggested that withholding some of the science could make sense, Musk answered with a simple "Yes."
  • Musk backed the idea of "attaching" OpenAI to Tesla to secure funding, describing Tesla as a "cash cow." He expressed concern about OpenAI's financial sustainability and suggested that hitching the lab to Tesla might be the best way forward.
  • OpenAI's shift from nonprofit to for-profit was already under discussion in 2018. Although OpenAI formally remains a nonprofit, a for-profit pivot was floated that year as a way to secure a more sustainable revenue stream over time.
  • Sutskever worried about the risk of a "hard takeoff," a scenario in which safe AI is harder to build than unsafe AI. He warned that open-sourcing everything could make it easy for someone unscrupulous, with access to massive hardware, to build an unsafe AI, with significant risks for the world.

By Fantasy