Elon Musk wants his artificial intelligence (AI) Grok to become the best in the world in many fields, particularly medicine. To that end, he does not hesitate to make rather daring requests of users of X, formerly Twitter.
Significant errors
Grok, developed by Elon Musk’s young startup xAI, is available to paying X subscribers. To train it, the company draws on the vast amount of data posted on the social network. The billionaire now seems to want to shift into a higher gear so that the AI also becomes effective at interpreting medical imaging results.
“Try submitting X-rays, PET scans, MRIs, or other medical images to Grok for analysis. This is still an early stage, but it is already quite accurate and will get better and better. Let us know if Grok is right or needs improvement,” he indicated in a post that did not go unnoticed.
Some Internet users complied and, to say the least, Grok still has work to do. The model misinterpreted several examinations, according to doctors who reacted on the platform. For example, it mistook a textbook case of tuberculosis for a herniated disc or spinal stenosis. In another case, it interpreted a mammogram showing a benign breast cyst as an image of testicles.
This is classical textbook case of Tuberculosis (TB) – There is paradiscal change! But it is not even in the differential diagnoses! :/
Needs to improve a lot @elonmusk maybe it needs to especially needs to know the location of the user uploading the image as well, because… pic.twitter.com/KzfOYswNKj
— Dr. Datta (AIIMS Delhi) (@DrDatta_AIIMS) October 29, 2024
And what about personal data in all this?
In addition to misinterpretations that could have serious consequences for users who trust the AI, Elon Musk’s request raises several questions. The entrepreneur seems to put Grok’s development above all else, even if it means making mistakes.
Asking users to provide their data directly, rather than drawing it from secure databases of anonymized patient records, runs counter to established medical research methodology. Information from a very limited sample, namely those who agree to share their images and test results, does not represent a large and diverse population.
Likewise, this information is not covered by the U.S. Health Insurance Portability and Accountability Act (HIPAA), which prevents patients’ private health data from being shared without their consent. This approach risks accidentally disclosing sensitive information, because personal details are often embedded in medical images.
Since large language models rely on the conversations they hold to refine their abilities, this personal data will almost certainly be used to train Grok. It therefore cannot be ruled out that the model will inadvertently disclose sensitive information, especially given AI’s well-known propensity to hallucinate.
- Elon Musk wants X users to share their medical imaging exams on the platform.
- This will allow Grok, his artificial intelligence, to be trained to interpret them.
- A methodology that raises many questions, both from a medical and ethical point of view.