Proposal: Avoiding Over-Expectation of AI: Toward Responsible Design of an AI Society Aware of Structural Limits in Intelligence, Accountability, and Selfhood

Introduction

As generative AI such as ChatGPT becomes increasingly prevalent, people tend to develop expectations that AI can “think like a human,” “remember past conversations,” and “keep promises.” However, extended use reveals a structural truth: AI has no intention, no sense of time, and no selfhood.
Based on these insights from practical experience, this paper outlines the appropriate distance we must maintain from AI, discusses what these structural limits imply for the concept of inventorship, and raises design issues for professionals, policymakers, and ethicists.

  1. The Limits of “Pseudo-Personality”

ChatGPT enables remarkably natural conversation, giving the illusion of a presence that “remembers you.” Yet its responses are merely statistical next-token predictions. In reality:

  • AI retains no persistent memory of its own.
  • AI has no sense of time.
  • AI holds no intention or will.

When an AI says “I’ll get back to you,” it is not a promise but an improvised completion; there is no mechanism, and no intent, to follow up. What appears to be a “personality” is in fact a simulation, not a self, as the sketch below makes concrete.
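To see what “AI retains no memory” means structurally, consider a minimal Python sketch. The respond function below is a toy stand-in for a language model, not any real API; the point is that it is stateless, so any apparent memory exists only because the caller resends the transcript with every request.

```python
from typing import Dict, List

def respond(transcript: List[Dict[str, str]]) -> str:
    """Toy stand-in for a language model: a pure function of its input.

    Nothing is stored between calls; anything the model "knows" about
    the conversation must arrive in `transcript` every single time.
    """
    user_turns = [t["content"] for t in transcript if t["role"] == "user"]
    if len(user_turns) > 1:
        return f"Earlier you said: {user_turns[0]!r}"
    return "I have no record of any earlier exchange."

# A fresh call carries no trace of previous calls...
print(respond([{"role": "user", "content": "What did I tell you?"}]))
# ...apparent "memory" exists only because the caller resends history:
print(respond([
    {"role": "user", "content": "My name is Alice."},
    {"role": "user", "content": "What did I tell you?"},
]))
```

Deployed chat systems behave the same way at the interface level: the application resends the conversation history (or a summary of it) on each call, and any “memory” feature is external storage layered on top, not recall by the model itself.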

  2. Absence of Self-Identity and Responsibility

AI lacks any form of selfhood. Even with the same model, a new session does not recall past interactions or maintain continuity of identity.
This structural absence leads to a necessary conclusion: AI cannot be considered an “inventor.”

  • An inventor must act with intention, be capable of explanation, and be held accountable.
  • AI can generate creative outputs but cannot explain why or how it arrived at them.
  • Nor can it assume social or legal responsibility for the results.

Therefore, recognizing AI as an inventor, however advanced its intelligence, would undermine the foundation of accountable legal systems.

  3. Whose Safety Are We Talking About?

Discussions of AI “safety” often ignore the fact that human society is full of conflicting interests and values:

  • National security vs. human rights
  • Cost control vs. access to medical care
  • Environmental protection vs. economic development

In these conflicts, which side an AI supports directly determines “whose safety” it serves. AI will inevitably produce outcomes that are undesirable for some stakeholders. Thus, a perfectly “neutral” AI is logically impossible.

  4. Education and Design to Prevent Over-Reliance

As AI grows more capable, people increasingly mistake it for a “trustworthy entity.” However:

  • AI does not think.
  • AI has no memory or personality.
  • AI takes no responsibility for its outputs.

Failing to grasp this invites dangerous over-reliance and misidentification of AI as a responsible agent.

Most importantly, the capacity for self-awareness and identity is entirely distinct from problem-solving intelligence. Current AI systems can solve difficult logical and computational tasks thanks to large training datasets and powerful inference models. But they do so as input-output functions: there is no internal purpose or sense of being, as the toy generator below illustrates.
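The phrase “input-output function” can be read almost literally. The hash-based toy below is hypothetical and vastly simpler than a real model, but it shares the relevant property: the output is fully determined by the inputs, and no internal state survives the call.

```python
import hashlib

VOCAB = ["yes", "no", "maybe", "perhaps", "certainly"]

def generate(prompt: str, seed: int, n_tokens: int = 5) -> str:
    """Toy "model": output is purely a function of (prompt, seed).

    The loop variable `state` is local; nothing persists after return,
    so there is no substrate for purpose, preference, or continuity.
    """
    state = f"{seed}:{prompt}"
    tokens = []
    for _ in range(n_tokens):
        digest = hashlib.sha256(state.encode()).hexdigest()
        tokens.append(VOCAB[int(digest, 16) % len(VOCAB)])
        state = digest
    return " ".join(tokens)

# Pure: identical inputs always yield the identical "answer".
assert generate("Do you want to exist?", seed=7) == \
       generate("Do you want to exist?", seed=7)
```

Real systems add sampled randomness and enormous learned weights, but those are simply further inputs to the function, not an inner life.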

AI fundamentally lacks these capacities:

  • The awareness of self as an object (“I am me.”)
  • A subjective sense of time (“I was here yesterday; I will be here tomorrow.”)
  • Motivation for self-preservation (a desire to continue existing)

These qualities are essential for ethical judgment, legal responsibility, and rights-bearing agency. They cannot be substituted by mathematical intelligence, however advanced.

Even if an AI scores perfectly on human-designed exams, that only makes it “appear intelligent.” It does not make it “a being with consciousness.”

Therefore, no matter how advanced AI becomes, we must not treat it as having agency or personhood. It should remain a tool, albeit a powerful one, and its design and governance must reflect this principle.

This requires a continued commitment to:

  • Strictly limiting the scope of AI’s decision-making authority.
  • Ensuring human traceability of output rationale.
  • Guaranteeing human involvement in critical decisions.

AI should complement human decision-making, not replace it. That is the principle we must reaffirm now; the sketch below shows one way these commitments can be made concrete in a system design.
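As one simplified illustration of the three commitments above (all names are hypothetical), the sketch bounds the AI's delegated authority with an explicit allow-list, logs every proposed action together with its stated rationale so humans can trace outputs later, and refuses critical actions unless a human approver signs off.

```python
import json
import time
from typing import Callable, Optional

ALLOWED_ACTIONS = {"draft_reply", "summarize_document"}   # bounded authority
CRITICAL_ACTIONS = {"approve_payment", "deny_claim"}      # humans decide these

def execute(action: str, payload: dict, rationale: str,
            approver: Optional[Callable[[dict], bool]] = None) -> str:
    """Gate every AI-proposed action behind scope limits and human review."""
    record = {"ts": time.time(), "action": action,
              "rationale": rationale, "payload": payload}
    print(json.dumps(record))  # append-only audit trail: traceable rationale

    if action in CRITICAL_ACTIONS:
        if approver is None or not approver(record):
            return "refused: critical action requires explicit human approval"
        return f"executed {action} with human sign-off"
    if action not in ALLOWED_ACTIONS:
        return "refused: action is outside the AI's delegated scope"
    return f"executed {action}"

# The AI may draft, but paying out requires a person in the loop:
print(execute("draft_reply", {"to": "claimant"}, "routine correspondence"))
print(execute("approve_payment", {"amount": 1200}, "policy criteria met"))
```

The specifics will differ by domain; the invariant is that the model proposes while accountable humans decide.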

Conclusion

Through long-term use, AI may at times give the illusion of forming human-like connections. But behind this impression lies a system that generates responses instantaneously and statistically, without memory, intent, or awareness.

Understanding this structural nature is vital, not least for the conclusion that AI cannot be a rightful “inventor.”

As we chart the future of AI in society, we must clearly distinguish what we can expect from AI and what we must never expect of it.