TWIG 718: Clean As a Whistle

Beep boop - this is a robot. A new show has been posted to TWiT…

What are your thoughts about today’s show? We’d love to hear from you!

I agree with the point made at the top of the show that the risk of AI is overhyped. I’m more worried about another pandemic or a war than about AI as the next global threat.

Leo and the TWiT team, I can’t express how much I enjoy your analysis of up-and-coming tech. Your coverage of AI is a public service at the very least, and a wealth of information.

I was shocked to learn how wrong I was about Nvidia. Talk about a company I underestimated.

However, there are a few tech topics I would have you take on. I’m interested in what you have to say from a logistical perspective, not a political one. I would like to know what you think of China’s Micron ban. Did China shoot itself in the foot with this ban? From my understanding, and I just want to hear a smarter person’s take on this, it sounds like Korea is the only market China has left to import chips from, other than the ones it can produce in-house.

I also wanted your take on Amazon shutting down operations in China.

As well as, though it’s not clear to me, China’s reversal on Bitcoin.

So yeah, I guess these are viewer questions, and if these topics have already been talked about, could someone please direct me to the shows and episodes that cover them?

Thank you, and keep up the good work. I’m enjoying your content every week.

2 Likes

A great discussion this week.

AI is interesting, but it still has a long way to go before it will be really useful. AI has been “the next big thing,” along with quantum, since I got into computing back in the early 80s. I went to my first AI symposium when I was in college, around 1985. We have seen several major steps forward over the years, but mainly in performance: how quickly it can give an answer and how complex that answer is. Nobody has been able to crack the important part, accuracy, and it seems that this is still the case. This is why we should be holding back on AI in public use. These systems are being marketed as what they are not, and many people will believe the marketing hype and use them inappropriately or without controls. The lawyer from last week is a prime example: “I can save time and let ChatGPT do the research, I don’t need to waste money on researchers = more money in our pockets, BINGO!” Only he didn’t even proof-check what he was given, just submitted it as-is. As Julia Roberts would say, “Big mistake. Big. Huge!”

AI is still at the lab stage, yet resources for research are limited these days, so it is being marketed as the next cash cow, even though it is a long way short of being market ready. As I pointed out elsewhere a few weeks ago, pure AI research has fallen off a cliff in the USA in the last decade, with Princeton, I think, being the only US institution in the top 10 (at place 9) for releasing academic papers on AI; the other nine are all Chinese. There is just no (or comparatively little) funding for pure research any more. It is all moving to internal R&D at companies.

As to the EU wanting to ban “open source AI”, that is Jeff doing a moral panic moment.

What they are trying to do has nothing to do with AI per se, but with all software. They are trying to ensure that the weasel words in EULAs, which say that the company is not responsible for any inaccuracies in its software or in the information or advice it provides, are no longer valid. Software companies will be held liable for their mistakes. If your software causes a company’s production line to fail, or it incorrectly mixes toxic chemicals into food (as an extreme example), or the self-driving mode of a car causes an accident or runs over a child, the company that made the software will be responsible for all costs incurred.

If they can prove that the user misused it, then the blame will stay where it is today; but if the user used the software within the guidelines provided by the software company and the software messed up, the software company is responsible.

This is something that many companies have wanted for decades. I remember it being a big issue back when I came into the industry in the late 80s, and it hasn’t changed much since. If anything, the quality of software seems to be going steadily downhill, because it is cheaper to push out buggy software and let the users test it than to do full in-house testing.

Back in the 80s and 90s, we couldn’t just push a new version out to all users every night. We had to arrange time to make a full build, write it to tape and courier the tape out to the clients, or send consultants and programmers to work at the client’s facility to correct the errors. That was very expensive, so a lot of time was invested in testing and getting the software right the first time. Obviously, software is rarely 100% accurate and problems did crop up, but they were relatively few compared to today.

I designed a corporate reporting system for one customer; it went out to 100 facilities in 60 countries and was used for the three years I was working on contract to them. In that time, we had one bug report, and that turned out to be a bug in Windows 95, not our software. Fast forward to the 2010s: I was working for an advertising and design company that produced e-commerce websites for its customers. The testing was rudimentary and problems were fixed on the fly as they occurred. There was no real testing, just fixes thrown into the live system during the sales, because the site was collapsing every couple of minutes; nobody had tested it under heavy load before releasing it to live!

This sort of corner cutting is enabled by software companies having little competition (“You want to switch from our several-million-dollar ERP system because it is slow and crashes? Good luck, it will cost you more than the lost productivity to switch, if you can find a system that is more reliable, that is!”). Windows is an even stronger example: if Windows is causing problems, are they bad enough that it would be more cost effective to roll out tens of thousands of new software images with Linux, retrain all of your staff, and lose productivity until they are up to speed? Or, even worse, to roll out complete new hardware based on Apple Silicon and take the same training and productivity hits, assuming you can even find software on those platforms that works just as it does on Windows? I’m talking LOB software here, not office suites, photo editing and the like, but ERP, CRM, telephony, manufacturing systems, finance software etc.

Some of that is moving to web-based and cloud offerings, but a lot isn’t, and such a migration is still a huge project that requires the same loss of productivity and retraining until the new systems are fully integrated.

So, the EU wants to improve the quality of software by making the producers actually liable for releasing shoddy, second-rate software, and the open source community is up in arms about this. On one hand, they could lose out because businesses won’t use their software when there is nobody to point the finger at when something goes wrong. (This has always been the case: with OSS you have to sort out the problem yourself or go to the forums, or, if you are lucky, there is a consulting company that will take on support for similar money to paying for support on a closed source product. With proprietary software, you pick up the phone, rant at tech support and wait for their engineers to sort out the problem for you.)

Alternatively, if the OSS software goes belly-up, it could be the individual programmers of the various projects who are personally liable for downtime at a big company…

That is still unclear: is OSS covered at all? If so, will it mean companies won’t use OSS because there is no company to sue, or will the programmers be made liable? (Yes, there are some big projects around, like Apache, Mozilla, Ubuntu etc., which would be targets if projects under their care caused problems, but they aren’t rich companies, they are non-profit foundations, so getting any recompense from them would be difficult anyway.)

1 Like

I find it interesting when listening to a variety of podcasts that people who talk about what we should teach kids often model behaviors that run contrary to the notion of learning.

For example, Jeff does his “moral panic” thing during the discussion on AI around the 20-minute mark. In doing so, he’s modeling the very thing he’s accusing others of doing: creating a sense of panic and hysteria. There’s a reason for the saying “Far more is caught than taught.” This is what cable news outlets do 24/7. Instead, I’m looking for what Joe Friday called for on Johnny Carson: “Just the facts, ma’am.”

The alternative to this drama-filled panic content is why I listen to TWIT. In spite of the panicky conversations around security, AI, news, and social media, I’m listening to learn.

The great conversation that followed the AI moral panic bit, about different types of chips and real-world applications for AI, is the kind of conversation where real learning begins: critical thinking skills take root and develop as knowledge is applied to real-world projects.

You can’t teach critical thinking or creativity, but in the proper environment, those traits develop and grow.

I think you might misunderstand Jeff’s “moral panic” bit. He’s not calling for thoughtless panic - he’s criticizing it. So often the knee-jerk reaction to new technologies is to either lionize or condemn them. I agree with you (and Jeff) that a more thoughtful and measured response is called for.

2 Likes

This was a great show! I waited until the end and was pleasantly surprised :joy:

1 Like