Message (1/33) :This was such a great conversation and event. We spoke for two hours and it felt like 15 minutes! Thank you so much for your intelligence and generous hospitality, @mhatta, @TasukuMizuno and @iuj_glocom https://t.co/oaphOv9Vq3
Message (3/33) :@kareem_carr @svpino @Grady_Booch The model could call the compiler tool, evaluate the code, and use the result for some form of fine-tuning. As for abduction, one could use scratchpad prompts like “list a few possible causes of X”. We need a paper measuring induction, deduction, and abduction capabilities.
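Neither technique in the message above is spelled out anywhere in the thread, so the following is only a minimal sketch, with all names hypothetical: a compile-and-run feedback signal on generated code, plus an abduction-style scratchpad prompt.

```python
# Hypothetical sketch of the two ideas above; nothing here comes from a real
# fine-tuning pipeline.

def evaluate_candidate(code: str) -> dict:
    """Compile and run model-generated code; the outcome could serve as a
    feedback signal for some form of fine-tuning."""
    try:
        compiled = compile(code, "<generated>", "exec")
    except SyntaxError as err:
        return {"ok": False, "stage": "compile", "error": str(err)}
    try:
        exec(compiled, {})
    except Exception as err:  # a runtime failure is also a useful signal
        return {"ok": False, "stage": "run", "error": str(err)}
    return {"ok": True, "stage": None, "error": None}

def abduction_prompt(observation: str, n: int = 3) -> str:
    """Scratchpad-style prompt nudging the model toward abductive reasoning."""
    return (f"Observation: {observation}\n"
            f"List {n} possible causes of this observation, most plausible first.")
```

The point of the sketch is only that both failure modes (compile-time and run-time) are machine-checkable, which is what makes them usable as training signal.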
Message (4/33) :RT @sirbayes: The AI genie is out of the bottle. Regulation will be very hard. As Tyler says ‘I really don’t think any international cooper…
Message (5/33) :RT @AlphaSignalAI: This is big. The Retrieval Plugin allows ChatGPT to have a memory! The model can now remember information from convers…
Message (6/33) :@profgeraintrees Interesting use of empower to mean give no new powers to… and interesting omission of the one regulator who has been producing guidance, the Information Commissioner. Also note that the first guidance I’ve seen HSE produce on neural networks was 22 years ago - with what effect? https://t.co/nWWP5SwPUt
Message (7/33) :RT @ceobillionaire: $AGI The AGI Token Could Embody the Expected Future Value Created by AGI. AGI Nodes X AGI Agents = AGI Token ($AGI)…
Message (8/33) :RT @emilymbender: They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more…
Message (9/33) :RT @emilymbender: Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella k…
Message (10/33) :RT @emilymbender: So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society a…
Message (11/33) :They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal." >
Message (12/33) :Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources). >
Message (13/33) :Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI". >
Message (14/33) :Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT4. ROFLMAO. >
Message (15/33) :And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes. >
Message (16/33) :RT @MontrealAIPFP: $AGI The AGI Token Could Embody the Expected Future Value Created by AGI. AGI Nodes X AGI Agents = AGI Token ($AGI) h…
Message (17/33) :This is interesting to me because FLI is strongly associated with Longtermism, and Longtermists have been using a line of reasoning that boils down to “we must build bigger AI systems to understand how to control them”. OpenAI was created initially to do this. There seems to be a gradual shift in the narrative wherein EA and Longtermists are starting to vilify OpenAI.
Message (18/33) :RT @marktenenholtz: GPT4All: A new 7B LLM based on LLaMa. The model is trained on 800k completions from GPT-3.5-Turbo. They released: •…
Message (19/33) :* self-modifying, as when a neural system changes its weights according to the new things it learns. ** non-holonomic, as when the future state of a system is a factor not just of current state but also of the path whereby it reached that current state.
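The non-holonomic point in the footnote above can be made concrete with a toy example (entirely illustrative, not from the thread): two systems that arrive at the same current value by different paths carry different histories, so any future rule that consults the history will diverge.

```python
# Toy illustration (not from the thread) of "non-holonomic": the same set of
# updates applied in a different order leaves the same current value but a
# different history, and any future rule that reads the history diverges.

class PathDependentSystem:
    def __init__(self):
        self.value = 0
        self.history = []  # the path taken to reach the current value

    def step(self, delta):
        self.value += delta
        self.history.append(delta)

    def next_value(self):
        """Future state depends on the path: the most recent step counts twice."""
        momentum = self.history[-1] if self.history else 0
        return self.value + momentum

a, b = PathDependentSystem(), PathDependentSystem()
for d in (5, -2):
    a.step(d)      # path 1: +5 then -2
for d in (-2, 5):
    b.step(d)      # path 2: same deltas, reversed order

# a.value == b.value == 3, yet a.next_value() == 1 while b.next_value() == 8
```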
Message (20/33) :Super exciting to see OpenFlamingo released! Finally an open-source version of DeepMind's Flamingo. This model is trained with @StabilityAI support and compute. Comes with all sorts of goodies, like a multi-modal dataset, which is needed for the future! https://t.co/DKZu4cx715 https://t.co/dzrEZpHT8W https://t.co/dlV0WkD2l6
Message (21/33) :Many of the new startup apps powered by large language models offer functionality similar to Apple Siri, Amazon Alexa, and Google Assistant, but with novel features. The playing field will be leveled further as open-source LLMs start infiltrating the startup ecosystem. https://t.co/gZCpcAkFpw
Message (22/33) :I miss how much generative models used to make me laugh with their batshit crazy text and images. I think we’ve lost something important. It’s like the world is a little bit more strip mall franchise restaurant.
Message (23/33) :RT @kareem_carr: @Grady_Booch A statistics perspective might be helpful here. LLM output probably does have some relationship to reality (…
Message (24/33) :@Grady_Booch A statistics perspective might be helpful here. LLM output probably does have some relationship to reality (since they are derived from a distribution of sentences that are related to reality) but that relationship is probably stochastic. https://t.co/2675VMgJhO
Message (26/33) :"I struggle to understand why the healthcare systems of the world’s richest countries are not yet using all the knowledge that science has provided to bring more help to.. millions of others living with pain." https://t.co/STqC1VB2qh by @lucy_os @Nature https://t.co/YynX6ATaLX
Message (27/33) :@SashaMTL "In the same way that English language emotion concepts have colonized psychology, #AI dominated by American-influenced image sources is producing a new visual monoculture of facial expressions."
Message (28/33) :@svpino @Grady_Booch I see your point, but I think the new code is the equivalent of a hypothesis in science that needs to be validated through experiment. The programmer still needs to verify that the code runs and that its outputs align with the intended purpose for it to be a “new truth”.
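The "code as hypothesis" framing in the message above can be sketched in a few lines (the generated snippet and test cases are hypothetical): the candidate is only accepted as a "new truth" once an experiment against the intended behavior passes.

```python
# Hypothetical sketch: treating model-generated code as a hypothesis that must
# survive an experiment (tests against the intended behavior) to be accepted.

def validate_hypothesis(func, cases):
    """The 'experiment': run the candidate against (inputs, expected) pairs."""
    return all(func(*args) == expected for args, expected in cases)

# Pretend this string came back from a model asked to "add two numbers":
generated = "def add(a, b):\n    return a + b"
namespace = {}
exec(generated, namespace)  # step 1: verify the code actually runs

accepted = validate_hypothesis(
    namespace["add"],
    [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)],
)  # step 2: verify the outputs align with the intended purpose
```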
Message (29/33) :For folks interested in ML4Health jobs, my friend @venkmurthy is looking for a data scientist to work on exciting applications of machine learning to computational imaging in healthcare. https://t.co/O9deyYwS0B
Message (30/33) :There are two business models for chatbots. ChatGPT is freemium. Bing Chat has ads. (It's had product-placement-style ad functionality from day one, though it doesn't seem to be enabled for most users yet. https://t.co/LqZO3MHyEe) I really really hope the freemium model doesn't go away. https://t.co/jeVebAYFPp
Message (31/33) :RT @Matt_Cagle: The only responsible standard for police use of face recognition is a total prohibition, and this new California bill moves…
Message (32/33) :RT @kate_saenko_: Only 7 women in this list of the top 100 computer scientists in the US. Women computer scientists, please do not get di…