Links & What I've Been Reading Feb/March 2023
As GPT-4 launches, what comes next? Here are a few links that guide the way
With the launch of GPT-4 yesterday, the truism that AI moves fast has never felt truer than it does this week:

Another case in point: last time round in High Leverage we considered the coming rise of AI tutors. Barely a month later, established players like Khan Academy are already using GPT-4 to ship these products:
These are already impressive steps, but this is just the start of the widespread productisation of AI.
To help guide you on the way, here are the best weird blog posts I’ve read over the last month. This time round it’s pretty much pure AI, so I’ve ordered them in terms of accessibility:
Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs
The early alignment issues of the histrionic Bing/Sydney (which truly earned the nickname ChatBPD) seem even worse now that we know it was a test run of GPT-4
Bing Chat is blatantly, aggressively misaligned, on LessWrong
Similar to the above: AI alignment nerds at LessWrong get into the details of Bing’s bad behaviour
The inside story of how ChatGPT was built, by MIT Technology Review
Great snippets from the insiders behind the world’s fastest-growing tech product ever
Power and Weirdness: How to Use Bing AI, by Ethan Mollick
Ethan, a Wharton professor, is leading the charge on integrating foundation models into his day-to-day work as a knowledge worker. More great tips here
Planning for AGI and Beyond, by OpenAI
OpenAI intends to use powerful AGI … to help us solve AGI alignment. That’s a circular argument if I’ve ever heard one
The race dynamics that are emerging here are seriously worrying
Should GPT exist? By Scott Aaronson at Shtetl-Optimized
Theoretical computer scientist, part-time philosopher, and current OpenAI researcher, Scott always has interesting takes and writes well
The Waluigi Effect, by Anon on Less Wrong
Reading this gave me a headache, but it’s worth it if you want to go deeper into the misaligned behaviour of foundation models. The piece posits that training an AI to do something is likely to increase its odds of doing the exact opposite as well, the so-called ‘Waluigi Effect’
Agnes Callard’s Marriage of the Minds, by The New Yorker
The only non-AI piece this month. The New Yorker takes a look at the unusual life of philosophy professor Agnes Callard, who wrote a famous work on ambition and the agency of becoming
Not quite as good as the New Yorker’s piece on UK politician Rory Stewart, which, in the author’s humble opinion, is one of the best short-form bios ever written
I’m still experimenting with post styles for this newsletter. In the future expect:
Link dumps like this one
Company profiles / market maps around AI
Profiles in ambition
And occasional long-form pieces / survival guides
Subscribe and share with your friends/colleagues, if you think they’d find this interesting. All it takes is a click 🙏