SN 917: Zombie Software

Regarding ChatGPT in Italy, I already posted about that a few days ago:

But, going to Steve’s comments and Ant’s comments about Adobe: in Adobe’s case, the information/images have been submitted to Adobe for inclusion in their system, which means the necessary releases were obtained in advance.

GDPR specifically states that you cannot scoop up information about people without first getting their permission. There are exemptions for search engines, because they “just point to the source information”, but GDPR expressly says that information cannot be sold or given to third parties (i.e. scraped) without the express permission of identifiable persons, and it can only be used for the purposes expressly stated when obtaining that permission - you cannot say you are gathering it for training purposes and then use it for a sales database, for example. Again, there is an exception for prominent people (politicians, actors, authors etc.) who, given their position in public life, could be argued to be in the public eye.

ChatGPT is scraping this information and using it as a basis for generating (often incorrect) responses, which could also fall foul of libel laws.

BUT: some companies have caught their directors and managers pumping information into ChatGPT without getting the required authorisation. For example, directors pushing the company’s internal annual report into ChatGPT and asking it to provide a PowerPoint slide deck with the pertinent information. Likewise, managers, at review time, have been found pushing personal information about their subordinates into ChatGPT and asking for a summary of their performance!

Such input of information without the identifiable person’s permission falls 100% under GDPR, and such information would have to be removed from ChatGPT (and a way found to stop such information being put into it in the future) before it could resume services in Italy. GDPR covers the whole of the EU plus Great Britain, so if the accusations against ChatGPT are found to be true, it could find itself banned across the whole of Europe - as would any other AI that uses unfettered access to information without first checking to see whether it can legally use that information (there are similar arguments about copyright, for example).

Doing what ChatGPT has done for a lab experiment, where the information is only used internally, could probably be defended. But the wholesale gobbling up of private and copyrighted material, without first getting the necessary legal clearances, and presenting it as a fait accompli to the general public is a very different matter.

In essence, this has nothing to do with AI in particular, but is an indictment of the laissez-faire attitude to the law of Silicon Valley and Big Tech in general. “We will ignore the law, as long as fines & lawyers’ fees cost us less than doing the right thing!”

When they are small or they are doing internal development, security and compliance with the law is ignored. Then, suddenly, the product “goes big” and it is too late to do things properly.

We have seen this with YouTube and copyright infringement. Instead of dealing with the problem while they were still small and scaling up the solution as they grew, they ignored the law until they were so big that the rest of the world took notice. Then they said the scale of the problem was too big and they couldn’t comply with the law because it would be too expensive, so they offered unsuitable workarounds instead.

AI companies look to be using this bad behaviour as a role model for the rollout of their own technology. “Who cares about the law, we are Big-AI!”
