I agree. Google are famous for the processing they apply to photos, rather than for the raw quality of the sensor and the glass; if you remove that processing, you will get poorer results. Some phone cameras have great sensors and lenses (Huawei, for example, through their partnership with Leica), and those could benefit from software that produces RAW photos and allows manual exposure and the like to be set.
But, at the end of the day, even the Huawei cameras aren’t a patch on a full-sized sensor and a decent bit of glass. If you look at a DSLR (or a modern mirrorless system camera), a good lens often costs more than a smartphone, and there is a reason why those lenses are so big and long.
The quality of the lenses on a smartphone is severely restricted by its form factor. For what they are, they are very good, but the lens on a $1,000 smartphone can’t compete with a $1,000 lens.
Given Google’s track record with their own photo app, I’d stick with that, unless you have specific use cases or scenarios that the post-shot algorithms struggle with.