Beep boop - this is a robot. A new show has been posted to TWiT…
What are your thoughts about today’s show? We’d love to hear from you!
Brilliant quote from Stephen Wolfram (@0:07:20):
The analogy that I’ve found useful, that sort of comes out of some science I’ve done in this area is let’s imagine that your task is to build a wall. Well, one way you can do that is you make these very precisely engineered bricks and you set them up in this very kind of organized way and you build this wall and you can keep building it, and you can keep building it very tall. Okay, that’s plan A. Plan B is you see a bunch of rocks lying around on the ground and as you build your wall you find a rock that’s roughly the right shape. You stick that one in. You keep building that way.
That second thing is pretty much what machine learning is doing. When you train a neural network, what it’s doing is it’s finding these kind of lumps of computation that happen to fit into what the training looks like and so on, and so it sort of puts that rock into the wall and keeps building. And it’s something where you can build the wall to a certain height, just with these randomly shaped rocks. But it’s not something where you’re going to be able to sort of systematically build it up very tall.
I think that’s perhaps a way to think about what’s going on. But in the end, it’s that kind of, you know, machine learning is getting things roughly right and that’s a big achievement in many domains. You know, getting it roughly right, writing that essay that makes sense is great. You can’t say that essay is precisely the right essay. It’s just oh, it’s an essay that makes sense. You know it’s a distinction between what needs to be precise and really built up many, many steps, and what needs to happen sort of roughly right.
[It’s amusing that the TWiT transcription AI completely hallucinated a line or two of text in this part of the transcript. I had to manually edit to fix it – but that’s far easier than transcribing by hand.]
This is the kind of metaphor a native Englishman would concoct. Anyone who has seen the PBS show All Creatures Great and Small has witnessed the fantastic drystone walls of the Yorkshire Dales:
I think Wolfram’s AI proposition is quite elegant. Wolfram has been pounding on the Wolfram Language API for 37 years. Notebooks that were coded on Day 1 can still be run seamlessly today. It’s a big API, and it runs flawlessly and is meticulously documented. ChatGPT has assimilated the entire documentation set, so it has a rather tremendous number of rocks to build drystone walls with.
Make no mistake: Stephen Wolfram’s presentation is designed to promote the strength of their AI offering. At the same time, it’s a compelling argument and he’s genuinely enthusiastic about what they’ve done. They have been grinding away for ~40 years to produce their API and the interpretation system. AFAIK, nobody has a platform that is remotely comparable.
Stephen writes entirely in Wolfram Notebooks. I bet everybody on staff does the same. Stephen’s What is ChatGPT Doing… And Why Does it Work is a straight computational essay. He writes with text and interjects bits of Wolfram Language code whenever appropriate. It is a tremendous way to make a presentation and a tremendous way for a company to work together.
IMHO, the only terrible thing that Wolfram Research produces is their home-rolled version of a Discord server: https://community.wolfram.com . It feels like a 20th-century BBS. It scales poorly; a discussion with >300 messages is almost unreadable. Inserting an emoji in your text will crash your front end when you try to post the message. I have asked repeatedly why they don’t port their discussions to Discord; I’ve never gotten an answer. I believe asking about that is the Third Rail inside the company. Wolfram dogfoods their software like nobody else in the industry; this is one place where dogfooding is a bad idea. At least they don’t try to roll their own web browser or operating system.
This interview might get some traction on YouTube. @Leo, did the production team edit this interview down and put it up?
I must’ve missed it in Windows Weekly, but I heard it in this episode: I didn’t know PaulyT’s son was deaf. Not that it makes a huge difference but I think that’s a cool fun fact. I would’ve loved to hear Paul talk more about ASL and whatnot.
I’m sure that gives Paul a leg up on accessibility topics.
They will. It’s always stimulating and fun to talk to Stephen. Even if sometimes I feel like a dog talking with God.
It sounds like you’re very familiar with Wolfram Inc. Any best practices you suggest with Notebooks? I’ve just signed up for a year!
I think Stephen hit the nail on the head for me. I come from an engineering and scientific background, where the answers have to be correct. It was the same as a programmer: you had a specification, the program did what the specification said, and you removed errors.
AI is currently about having to accept bugs and errors, which is hard for someone who works in absolutes to accept.
FWIW, I thought the IM panel provided a great sounding board. Stephen does great on his own, but he does even better when very smart people are listening intelligently to the conversation.
You have a Wolfram Language license with the Notebook Assistant for a year? Wow. That’s a treasure. What plans do you have for the 2nd Seat on that license?
Conrad Wolfram has a “computational conversations” podcast that’s folded into the Wolfram YouTube channel. Mathematica with the Notebook Assistant is powerful and highly accessible. It will continue to gain powers with the back-end upgrades to ChatGPT 4.5. Stephen was coy when you asked him what LLMs they are using; I suspect they are constantly playing with ALL of them from every company. Wolfram Research views the LLM as the commodity; adding that to 40 years of proprietary curated and polished data libraries, APIs, and computation engine is Wolfram Research’s value proposition. And, BTW, I suspect that Theodore Gray is the point man on Wolfram Research’s harnesses into the LLMs. Theo is very smart; his wooden periodic table table is one of the most badass hacks of the 21st Century.
The AIs bring the code generation. As Conrad noted: humans bring abstraction to the dance; AIs can help bring those abstractions into reality. What do you grok that you know the rest of the world does not grok? What would bring you satisfaction to teach to others – or to yourself? What’s an idea that can be explained with a beautiful visualization – a visualization that includes a way to Manipulate with knobs and sliders?
Here are some examples of what’s on my plate:
Neville Hogan from MIT has made a career of using biomimicry to engineer “soft” robots. His paper Adaptive Control of Mechanical Impedance by Coactivation of Antagonist Muscles (1984) is one of the early ones. His thesis is that rigidity can be achieved through co-activation of the robot equivalent of muscles – and softness can be achieved by lessening that co-activation. MIT, CMU, etc. have research labs working on this; I don’t think it has ever broken through to STEM robotics. I value robotics as a means to better understand the human body: reverse biomimetics.
When we play the piano, there are distinct ways to strike the keys. When we play scales, we are flexing/extending our finger muscles; our arm flexors/extensors are co-activated to provide a stable platform. When we play chords, the finger muscles are co-activated – stiff hand – and our arm muscles flex and extend to play the chords. In both cases, our spiraling arm lines – pronators and supinators – are co-activated to provide a stable platform for the finger and arm flexors/extensors to do their thing.
Now! There is a third way to play the piano. Billy Joel does it during “Brenda and Eddie” in “Scenes from an Italian Restaurant.” I’ll show you:
In this case, the supinators strike the low-octave key and the pronators strike the high-octave key. No flexion/extension is used to tickle the ivories; it’s all arm spiraling/rotation! The middle fingers are all relaxed; the arm flexors/extensors are only slightly co-activated. It’s a loosey-goosey way to play the piano; it’s part of the reason that we love that Billy Joel piece. The amazing thing: our CNS can rapidly learn how to play the piano these three ways. We are very good at moving new ways by imitating what we see; we are then very good at polishing those movements to be graceful and energy-efficient. OTOH, playing like Billy Joel may take a few… decades.
Here’s the point: those last two paragraphs are tedious. The concept probably wouldn’t stick after one reading. I’m throwing out a bunch of ideas that have to be deciphered. Only the video at the end provides a visceral link to something we can easily understand. All of it can be part of a computational visualization: graphically showing what muscles are activated to move + the muscles to co-activate to provide a platform. The profound yin/yang of our structure. With Mathematica + the Notebook Assistant, you can make a museum-quality set of exhibits to explore co-activation and movement in the human body.
Example #2: Twenty years ago, BOSU inventor and fitness mad scientist David Weck created a way to exercise with a jump rope without jumping over the rope. He called it RMT Ropes; enthusiasts around the world have renamed it Flow Rope. Weck shows the “dragon roll” from flow rope:
Whoa! What’s happening? How do you move that way? What is the path of the rope?
In Dragon Roll, it turns out that the midline of the rope follows Viviani’s Curve. Mathematica is a great way to visualize that motion; many flow-ropers have no idea that the rope is criss-crossing over their heads. I was able to derive the parametric equation for the curve; I’d never done that kind of math before. There’s also the concept of impedance: phase-dependent and frequency-dependent movement in the body. As a ham radio operator, I’m sure you are familiar with electrical impedance in radios. When co-activations happen at a particular time in cyclical movement, I call that just-in-time tension. These happen everywhere in the human body all the time! Tesla knew everything about electrical impedance; he would have been gobsmacked if he had ever grokked musculoskeletal impedance.
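For anyone who wants to poke at this outside Mathematica, here’s a minimal Python sketch using the textbook parametrization of Viviani’s curve (this is the standard form from the math literature; I’m taking on faith the claim above that the rope’s midline follows it). The neat part is that every point simultaneously lies on a sphere and on a cylinder tangent to it – which is exactly the criss-crossing behavior flow-ropers feel but don’t see:

```python
import math

def viviani(t, a=1.0):
    """Standard parametrization of Viviani's curve: the intersection
    of a sphere of radius 2a with a cylinder of radius a that is
    internally tangent to the sphere. Full period is t in [0, 4*pi)."""
    x = a * (1 + math.cos(t))
    y = a * math.sin(t)
    z = 2 * a * math.sin(t / 2)
    return x, y, z

if __name__ == "__main__":
    # Sanity check: every sampled point satisfies both surface equations,
    #   sphere:   x^2 + y^2 + z^2 = (2a)^2
    #   cylinder: (x - a)^2 + y^2 = a^2
    a = 1.0
    for i in range(200):
        t = 4 * math.pi * i / 200
        x, y, z = viviani(t, a)
        assert abs(x**2 + y**2 + z**2 - (2 * a) ** 2) < 1e-9
        assert abs((x - a) ** 2 + y**2 - a**2) < 1e-9
    print("all sampled points lie on both the sphere and the cylinder")
```

Feeding those (x, y, z) triples to any 3D plotter (ParametricPlot3D in Mathematica, or matplotlib in Python) shows the figure-eight-over-the-head shape.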
Example #3: Anatomy Trains is an abstraction of our musculoskeletal network. It is wildly popular for manual and movement professionals; a 4th edition of a science book like that is very rare in the publishing industry. At the same time, the very idea of the Anatomy Trains has never been embraced by civilians. How about an Anatomy Trains Coloring Book? Would it be something to print and physically color, or would the “coloring” involve expansive 3D zoom and pan – and maybe some 3D printed objects? Who knows? Stay tuned!
Example #4: Geoffrey West’s Scale is a beautiful text. It explains highly technical concepts in an accessible way. At the same time, what would be possible if a set of computational essays/visualizations were written about the ideas of the book? What beautiful minds could be introduced to this somewhat-intimidating text?
Example #4.5: Bucky Fuller’s Synergetics is rewarding, but it’s a much more challenging book than “Scale”. What could be done to bring Synergetics to life – to upgrade this text to the 21st Century?
That’s my schtick. YMMV – but feel free to steal anything I described above. My advice: promise yourself to publish your work regularly. It is an enormous privilege to have access to the tools that you’ve given yourself. Strive to publish 1% of what Stephen Wolfram publishes. BTW: my secret wish is for Stephen Wolfram to become a flow rope ninja.
One other suggestion: Wolfram U is holding a 1-hour class on using the Notebook Assistant on March 12 at 1 PM EDT. If you pre-register, you can play back the course recording at your leisure. The instructor is Arben Kalziqi, who is superb. His cat is world-class at announcing himself at some point during class.