OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation
Orange discuss: https://news.ycombinator.com/item?id=39207291
I don’t have any particular section to call out. May post thoughts tomorrow; today it’s after midnight, oh gosh, but wanted to post since I knew y’all’d be interested in this.
Terrorists could use autocorrect according to OpenAI! Discuss!
I caught a whiff of that stuff in the HN comments, along with something called “Solomonoff induction”, which I’d never heard of, and the Wiki page for which has a huge-ass “low quality article” warning: https://en.wikipedia.org/wiki/Solomonoff's_theory_of_inductive_inference.
It does sound like the current AI hype has crested, so it’s time to hype the next one, where all these models will be unified somehow and start thinking for themselves.
Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of Bayesian reasoning, which, if implemented, would be the best deducer in the world and get everything right.
It would be cool if you could build this, but it’s literally impossible: the method is provably incomputable, since its prior sums over every program that could have produced your observations, and you can’t even decide which of those programs halt.
The hope is that if you build a shitty approximation to Solomonoff induction that “approaches” it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.
My metaphor is that it’s like coming to a river you want to cross and being like, “Well, Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across.” You aren’t Moses. Build a bridge.
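To make the “shitty approximation” point concrete, here’s a toy sketch (purely illustrative, not anyone’s actual proposal). Real Solomonoff induction weights every program p on a universal Turing machine by 2^-length(p), keeps the programs whose output extends what you’ve observed, and lets them vote on the next bit. Any computable stand-in has to gut the “every program” part somehow; this one shrinks it all the way down to “every repeating bit pattern up to length 8”:

```python
# Toy "Solomonoff-ish" predictor. NOT the real thing, which is incomputable.
# Real Solomonoff induction: weight every program p on a universal Turing
# machine by 2^-|p|, keep the programs whose output extends the observed
# string, let them vote on the next bit. This sketch replaces "every program"
# with "every repeating bit pattern of length <= 8".

from itertools import product

def predict_next_bit(observed: str, max_pattern_len: int = 8) -> float:
    """P(next bit is '1') under a 2^-length prior over hypotheses of the
    form 'the sequence is this pattern repeated forever'."""
    weight_one = 0.0
    weight_total = 0.0
    for length in range(1, max_pattern_len + 1):
        prior = 2.0 ** (-length)  # shorter pattern = simpler = more prior mass
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            # repeat the pattern far enough to cover observed plus one more bit
            stream = pattern * (len(observed) // length + 2)
            if stream.startswith(observed):  # hypothesis survives the data
                weight_total += prior
                if stream[len(observed)] == "1":
                    weight_one += prior
    return weight_one / weight_total if weight_total else 0.5

print(predict_next_bit("01010"))  # ~0.82: "01" fits and predicts '1' next
```

The 2^-length prior survives the downgrade; the universality doesn’t, and all the magical claims about Solomonoff induction live in the part you had to throw away. Splashing really hard, in other words.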
it’s very worrying how crowded Wikipedia has been getting with computer pseudoscience shit, all of which has a distinct stench to it (it fucking sucks to dig into a seemingly novel CS approach and find out the article you’re reading is either marketing or the unpublishable fantasies of the deranged), but none of which seems to get pruned from the wiki. presumably that’s because proving it’s bullshit needs specialist knowledge, and specialists are frequently outpaced by the motivated deranged folks who originate articles on topics like these
for Solomonoff induction specifically, the vast majority of the article very much feels like an attempt by rationalists to launder a pseudoscientific concept into the mainstream. the Turing machines section, the longest one in the article, reads like a D-quality technical writing paper. the citations are very sparse and not even in Wikipedia’s format; it waffles on forever about the basic definition of an algorithm and about how inductive Turing machines are “better” because they can be used to implement algorithms (big whoop), followed by a bunch of extremely dense, nonsensical technobabble:
Note that only simple inductive Turing machines have the same structure (but different functioning semantics of the output mode) as Turing machines. Other types of inductive Turing machines have an essentially more advanced structure due to the structured memory and more powerful instructions. Their utilization for inference and learning allows achieving higher efficiency and better reflects learning of people (Burgin and Klinger, 2004).
utter crank shit. I dug a bit deeper and found that the super-recursive algorithms article is from the same source (it’s the same rambling voice and improper citations), and it seems to go even further off the deep end.
Taking a look at Super-recursive algorithm, and wow…
Examples of super-recursive algorithms include […] evolutionary computers, which use DNA to produce the value of a function
This reads like early-1990s conference proceedings out of the Santa Fe Institute, as seen through bong water. (There’s a very specific kind of weird, which I can best describe as “physicists have just discovered that the subject of information theory exists”. Wolfram’s A New Kind of Science was a late-arriving example of it.)