2025 Beijing Senior High School Entrance Examination (Zhongkao), Reading Passage D: AI's Problems Are Rooted in People

People are talking a lot about artificial intelligence (AI), viewing it as a force that could reshape how society works. But there is something important missing from this discussion. It isn’t enough to ask how it will change us. We also need to understand how we shape AI and what it can tell us about ourselves.

Every AI model we develop mirrors our rules and expresses our beliefs. A few years ago, while looking for new workers, a famous company gave up an AI-powered tool after finding it unfavorable to women. The AI was not designed to behave this way; instead, it was influenced by historical data (数据) favoring men. Similarly, a recent study found that lending algorithms (算法) often offer less favorable terms to people of color, worsening long-standing unfairness in the money-lending business. In both cases, AI isn't creating new biases (偏见); it is mirroring the ones that are already present.

These reflections (反映) give us an important chance to take a close look at ourselves. By making these problems visible and more pressing, AI challenges us to recognize and address what causes algorithmic bias. As AI continues to develop, we must ask ourselves how we as ordinary people want to shape its role in society. We should not only improve AI models, but also make sure that AI is developed and used responsibly.

A number of companies are already taking action. They are examining the data, rules, and beliefs that shape the behavior of AI models. Still, we cannot expect the companies to do all the work. As long as AI is trained on human data, it will reflect human behavior. That means we have to think carefully about the footprints we ourselves leave in the world. I may value privacy, but if I give it up in a heartbeat to visit a website, the algorithms may make a very different judgment of what I really want and what is good for me. If I want meaningful human connections yet spend more time on social media and less time in the physical company of my friends, I am indirectly training AI models about the true nature of humanity.

As AI becomes more powerful, we need to take increasing care to read our principles (原则) into the record of our actions rather than allowing the two to diverge. Recognizing this allows us to make better decisions, but only when we are prepared to look closely and take responsibility for what we see.


1. Why does the writer introduce the two examples in Paragraph 2?

A To suggest a solution.

B To stress a difference.

C To challenge a practice.

D To support a viewpoint.

Analysis: The answer is D (writer's-purpose question). Paragraph 2 opens with the central claim that every AI model mirrors our rules and expresses our beliefs, and then gives two examples, the AI hiring tool that was unfavorable to women and the lending algorithms that offer worse terms to people of color, in order to support that claim (AI reflects biases people already hold). A "To suggest a solution", B "To stress a difference", and C "To challenge a practice" do not describe the purpose of the examples, so the answer is D.

2. What does the word “diverge” in the last paragraph most probably mean?

A Improve.

B Appear.

C Separate.

D Repeat.

Analysis: The answer is C (word-meaning inference question). The last paragraph says "we need to take increasing care to read our principles into the record of our actions rather than allowing the two to diverge". The first half means letting our principles show in our actions; "rather than" signals a contrast, so the second half must mean not letting the two ____. A "Improve", B "Appear", and D "Repeat" do not fit the logic. C "Separate" fits: not letting principles and actions drift apart contrasts with embedding principles in actions, so the answer is C.

3. According to the passage, what is a good example of shaping AI responsibly?

A Guarding one’s privacy against AI models.

B Being mindful of our feeds into AI models.

C Training algorithms to favor the latest data.

D Designing algorithms to deal with unfairness.

Analysis: The answer is B (detail comprehension question). The question asks for a good example of shaping AI responsibly. Paragraph 4 makes the key point that as long as AI is trained on human data, it will reflect human behavior, so we have to think carefully about the traces of behavior we leave, that is, be mindful of what we feed into AI models. A "Guarding one's privacy against AI models": the passage only says that valuing privacy yet giving it up easily leads the algorithms to misjudge us; it never presents guarding privacy as a way of shaping AI responsibly. C "Training algorithms to favor the latest data": "the latest data" is never mentioned, and favoring one kind of data could itself deepen bias. D "Designing algorithms to deal with unfairness": the passage argues that the unfairness comes from human data and behavior, so redesigning algorithms alone cannot solve it, and this is not offered as an example of responsible shaping. B "Being mindful of our feeds into AI models" matches the passage's call to examine our own behavioral traces and take responsibility for what we feed into AI, so the answer is B.

4. Which of the following is the best title for this passage?

A AI Isn’t the Problem; We Are

B AI: A Tool to Reshape Our Society

C More Open Algorithms for Better AI?

D Building Trust in Human-AI Relationships

Analysis: The answer is A (main-idea question). The core logic of the passage is that AI's biases and problems are not created by AI itself but are reflections of human rules, data, and behavior, and that shaping AI well depends on people examining themselves and taking responsibility. A "AI Isn't the Problem; We Are" captures this precisely. B "AI: A Tool to Reshape Our Society" corresponds only to the opening sentence and misses the passage's focus on how humans shape AI and must reflect on themselves. C "More Open Algorithms for Better AI?" brings in "open algorithms", which the passage never mentions. D "Building Trust in Human-AI Relationships": the passage does not revolve around trust, so this title is unsupported. The answer is A.