When AI meets doctors: Doctor’s Dilemma
In the digital era, everything is being moved onto the internet, and one significant change is the introduction of online health tools. Since the early days of the internet, people have been thinking about bringing healthcare services online. Launched in 1996, WebMD can be seen as one of the earliest platforms to offer comprehensive health information online. Since then, medical applications have been constantly developing and evolving, and today there is a wide range of healthcare applications. From advanced symptom checkers like Ada Health to everyday biological indicators in Apple's Health app, these tools have enhanced healthcare efficiency, making care more accessible and convenient. However, they also pose threats and challenges that we might otherwise ignore.
AI into medical administration
To tackle the dual challenges of administrative inefficiency and uneven GP workloads, here we propose a combined solution: an AI-assisted system that optimizes dynamic patient allocation. This system utilises advanced data processing and machine learning to manage patient flow, ensuring that GP practices operate efficiently while improving patient access to care.
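To make the idea of dynamic patient allocation more concrete, here is a minimal sketch of how such an allocator might balance GP workloads. It is only an illustration, not the proposed system: the class names (GP, Patient), the triage-style priority field and the greedy, least-loaded-first rule are all assumptions standing in for the data processing and machine learning components described above.

```python
import heapq
from dataclasses import dataclass, field


@dataclass
class GP:
    name: str
    open_slots: int        # appointment slots still free today (assumed input)
    current_load: int = 0  # patients already allocated in this run


@dataclass(order=True)
class Patient:
    priority: int                     # lower value = more urgent (e.g. a triage score)
    name: str = field(compare=False)


def allocate(patients, gps):
    """Assign the most urgent patients first to the least-loaded GP with free slots."""
    assignments = {}
    # Min-heap keyed on current load, so the least busy GP is always considered first.
    load_heap = [(gp.current_load, i) for i, gp in enumerate(gps)]
    heapq.heapify(load_heap)

    for patient in sorted(patients):  # most urgent first
        while load_heap:
            load, i = heapq.heappop(load_heap)
            if gps[i].open_slots > 0:
                gps[i].open_slots -= 1
                gps[i].current_load += 1
                assignments[patient.name] = gps[i].name
                heapq.heappush(load_heap, (gps[i].current_load, i))
                break
            # GP is full: leave it off the heap and try the next one.
        else:
            assignments[patient.name] = "unallocated"  # no capacity anywhere
    return assignments


if __name__ == "__main__":
    gps = [GP("Dr Shah", open_slots=1), GP("Dr Lee", open_slots=2)]
    patients = [
        Patient(1, "acute chest pain"),
        Patient(3, "repeat prescription review"),
        Patient(2, "persistent cough"),
    ]
    print(allocate(patients, gps))
    # e.g. {'acute chest pain': 'Dr Shah', 'persistent cough': 'Dr Lee',
    #       'repeat prescription review': 'Dr Lee'}
```

In a full system, the priority scores and capacity estimates would come from the machine-learning side of the pipeline rather than being hard-coded as they are in this toy example.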
Threat of AI in healthcare: Healing in the AI era
On the other hand, from both a neuroscientific and a sociological point of view, AI will never be capable of genuinely caring for patients. To show real care as humans do, AI needs to understand ideas such as empathy, pain and fear, and to do so it needs to grasp the concept of consciousness. This is difficult, as the logic behind consciousness remains a mystery: there are currently no valid models of consciousness for either humans or AI to work from, nor does AI currently possess the ability to act consciously. An article from MIT Technology Review states that AI agents “don’t possess the right type of feedback connections, use global workspaces, or appear to have any other markers of consciousness.” [5] It is therefore impossible for AI doctors to perform social care effectively, and they cannot provide the same level of social care to patients as human doctors can. The key reason is that AI can only mimic consciousness through its actions: it acts based on examples from its surroundings, not from consciousness within. I would therefore argue that AI doctors are naturally deficient in social care, and that this will be difficult to improve. AI agents must consequently be treated with caution in social care and in public-facing roles, as they could harm both the effectiveness of the care provided and patient identities.