Hope I haven't 'borrowed' any trademarks here, but I am wondering how far other readers would let a digital co-pilot go?
The 'AI' helper apps, the likes of Bing AI, Bard, 'FaceBook Friend', 'Apple Playmate', etc., are only as useful as their inputs, which in turn are only as useful as their perceived context. A multi-streamed input will give more balanced results, while better context should give a more accurate answer to a query/problem.
Would you be prepared to give one of the above 'helpers' access to your social media? After all, you are sharing it with the world anyway. How about basic information on your contacts?
My recent searches for holiday ideas culminated in a city break in Amsterdam. I have a few places earmarked to try for food, but it might be handy if it suggested asking my friend Jan, who lives in Amsterdam, what he thinks about them, or where he would suggest for something similar, maybe even going as far as to ask him if he has time to join us.
Would you have this 'shadow self' as a standalone app, as a service that is part of the OS, or maybe even as a cloud service?
What do you think? Am I seeing the world through rose-tinted glasses? Would my shadow self be hacked by a team of journalists from the New York Times?
I wouldn't trust it to do anything reliably at the current time. The results are too unpredictable, from what I've seen: sometimes they provide useful information, but they are often wrong, or I have a gut feeling the results aren't 100%, so I end up doing "non-AI" research anyway to confirm them.
This current generation of AI is interesting, probably as interesting as my first encounter with AI in the late 80s, but I still see it as a curiosity in its current form, rather than a “co-pilot” or authority on anything.
I have a feeling that the current stage is great for some limited functionality. Brad Sams over on Paul Thurrott's podcast summed it up best, I think: it would be great in something like Excel, where you tell it to take the entered data, perform an analysis on it, and format the output a certain way. That is something it should be able to do very well at this current stage. But there are too many unanswered questions, the understanding of the models is too vague, and there are no real ways to test the results for accuracy. A lot of work still needs to be done before these things are really ready for prime-time use.
Given that something like the Google Home is not reliable enough to trust, I can't imagine trusting any agent that is supposed to integrate more deeply into my life. I think the progression needs to start with lots of "public" agents that are reliable (things like voice agents for booking and customer support), then transportation agents (self-driving cars and the like), then robot health agents (lifting people in and out of beds/wheelchairs, then maybe going as far as bathing, feeding and other care). Once these things have demonstrated their worth and reliability in public, only then would I consider inviting one into my private life.
I am on the other side, I am afraid. As a self-confessed 'old fart' I have a limited shelf life and a not very wonderful memory, so I would value the help. I think for most people it doesn't have to be perfect to be useful. People already trust Google way too much, and some even trust politicians!
Yeah, and I don't trust Google as far as I could throw one of their data centres… I use DDG, and I generally check 4 or 5 sources if I am trying to verify something, or look for results from an authoritative source if I know of one for the subject at hand.