📖 GFP English Reading Comprehension Test

GE3 MOCK READING EXAM

⏱️ Time: 60:00

📚 READING INSTRUCTIONS

⚠️ IMPORTANT: Read the passages below and answer the questions that follow. You may write your answers on the question paper, but you MUST transfer your answers to the answer sheet before the 60 minutes are over. You will NOT be given any extra time at the end to do this.

Section 1: The Evolution of Artificial Intelligence

A. In 1950, British mathematician Alan Turing published a groundbreaking paper titled "Computing Machinery and Intelligence" in the journal Mind. In this paper, he proposed what would become known as the Turing Test, a method for determining whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing asked the question, "Can machines think?" and suggested that if a machine could engage in a conversation with a human without being detected as a machine, it could be considered intelligent. This simple yet profound idea laid the philosophical foundation for the field of artificial intelligence. Turing's work was revolutionary because it was one of the first serious academic attempts to consider machine intelligence not as science fiction, but as a legitimate scientific pursuit.

B. The term "artificial intelligence" was officially coined in 1956 at a conference held at Dartmouth College in New Hampshire, USA. The conference was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, who are now considered the founding fathers of AI. The event brought together leading researchers who shared a common belief that "every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it." During this conference, the researchers set ambitious goals, predicting that machines would be able to solve complex problems within a generation. Early AI programs developed in the late 1950s and 1960s showed promise, including the Logic Theorist, which could prove mathematical theorems, and ELIZA, a program that could simulate conversation by recognizing keywords and responding with pre-programmed phrases.

C. Despite early optimism, the field of AI experienced several periods of reduced funding and interest, known as "AI winters." The first AI winter occurred in the 1970s when researchers realized that the problems were far more complex than initially thought. Computers lacked the processing power and memory needed to handle real-world tasks, and many promises made by AI researchers failed to materialize. Funding agencies became skeptical and reduced their support dramatically. A second AI winter happened in the late 1980s and early 1990s after the collapse of the market for specialized AI hardware and the failure of expert systems to deliver on their promises. However, these setbacks ultimately proved beneficial, as they forced researchers to develop more realistic expectations and focus on solving specific, practical problems rather than attempting to create general human-level intelligence.

D. The field experienced a renaissance in the 1990s and 2000s, driven by advances in computing power, the availability of large datasets, and improvements in algorithms. Machine learning, particularly a technique called deep learning, became increasingly powerful. Deep learning uses artificial neural networks inspired by the human brain's structure, with multiple layers of interconnected nodes that can learn from vast amounts of data. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, marking a significant milestone in AI history. In 2011, IBM's Watson won the quiz show Jeopardy! against human champions, demonstrating AI's ability to understand natural language and retrieve information. More recently, in 2016, Google's AlphaGo defeated the world champion of Go, an ancient board game far more complex than chess, using deep reinforcement learning techniques.

E. Today, artificial intelligence is embedded in many aspects of daily life, often in ways people do not notice. Virtual assistants like Siri, Alexa, and Google Assistant use natural language processing to understand and respond to voice commands. Recommendation systems on platforms like Netflix, Spotify, and Amazon analyze user behavior to suggest content and products. Autonomous vehicles use computer vision and sensor fusion to navigate roads safely. In healthcare, AI algorithms can detect diseases from medical images with accuracy rivaling or exceeding that of human doctors. The global AI market was valued at approximately $136 billion in 2022 and is projected to reach $1.8 trillion by 2030. However, this rapid growth has also raised important ethical questions about privacy, job displacement, algorithmic bias, and the potential risks of creating superintelligent systems that might act in ways contrary to human values.

Section 2: The Concerns About Artificial Intelligence

"We are standing at the edge of a technological revolution that will fundamentally alter the way we live, work, and relate to one another," writes Professor Elena Rodriguez, director of the Center for AI Ethics at MIT. She notes that while AI offers tremendous benefits, it also poses significant challenges that society must address proactively.

One of the most pressing concerns is job displacement. A 2023 report by the McKinsey Global Institute estimated that by 2030, between 75 million and 375 million workers worldwide may need to switch occupational categories due to automation and AI. Jobs involving routine, repetitive tasks are particularly vulnerable. Manufacturing workers, data entry clerks, telemarketers, and even some white-collar professionals like paralegals and radiologists face potential displacement. However, the same report notes that AI will also create new job categories, particularly in fields requiring human creativity, emotional intelligence, and complex problem-solving. The challenge lies in ensuring that workers can transition to these new roles through education and retraining programs.

Privacy concerns represent another critical issue. AI systems often require vast amounts of personal data to function effectively. Facial recognition technology, for instance, has become increasingly accurate and is now used for everything from unlocking smartphones to surveillance in public spaces. Dr. James Liu, a privacy researcher at Oxford University, conducted a study examining facial recognition systems deployed in 50 major cities worldwide. He found that 68% of these systems had inadequate safeguards against misuse, and 43% stored biometric data indefinitely without proper consent protocols. Dr. Liu warns that "the combination of AI and big data creates unprecedented opportunities for surveillance, both by governments and corporations, potentially threatening fundamental freedoms."

Algorithmic bias is equally troubling. AI systems learn from historical data, which often contains human biases. If training data reflects societal prejudices, the AI will perpetuate or even amplify these biases. In 2018, researchers discovered that a major tech company's AI recruitment tool showed bias against women because it had been trained on resumes submitted over a ten-year period, during which the majority of applicants were male. Similarly, studies have shown that facial recognition systems have higher error rates when identifying people with darker skin tones, leading to concerns about discriminatory practices in law enforcement and security applications. Amazon's facial recognition technology was found to be 31% less accurate at identifying women with darker skin compared to men with lighter skin.

Perhaps the most existential concern involves the development of artificial general intelligence (AGI) and superintelligence. While current AI systems excel at specific tasks, they lack general intelligence and consciousness. However, some experts believe that AGI—machines that can understand, learn, and apply knowledge across a wide range of tasks at human level or beyond—could emerge within decades. Prominent figures like Stephen Hawking and Elon Musk have warned about the potential dangers. In a famous statement, Hawking said that while AI could be "the biggest event in human history," it could also be "the last, unless we learn how to avoid the risks." The concern is that superintelligent AI might pursue goals misaligned with human welfare, and once created, might be impossible to control or shut down. As a result, many researchers now advocate for the development of "aligned AI" that shares human values and goals.

❓ Questions (1–25)

Section 1: The Evolution of Artificial Intelligence (Questions 1–15)

📋 LIST OF HEADINGS for Questions 1–4

Heading 1: A mathematician proposes a test for machine intelligence
Heading 2: AI achievements through advanced computing methods
Heading 3: AI applications transform modern society
Heading 4: The birth of a new scientific field
Heading 5: Funding challenges and periods of decline

Matching Headings (Questions 1–4)

INSTRUCTIONS: Choose the correct heading number (1–5) for paragraphs B, C, D, and E from Section 1. Write ONLY THE NUMBER in your answer. The first one (Example A = 1) has been done for you.

Questions 5–10: True / False / Not Given

INSTRUCTIONS: Do the following statements agree with the information given in the reading passage?

In the correct space on your answer sheet, write:

  • TRUE (T) if the statement agrees with the information
  • FALSE (F) if the statement contradicts the information
  • NOT GIVEN (NG) if there is no information on this

5. Alan Turing was the first person to use the term "artificial intelligence."

6. The Dartmouth Conference took place before Turing published his paper on machine intelligence.

7. AI winters were caused by overly optimistic predictions and technical limitations.

8. Deep Blue was the first AI to defeat a world champion in any game.

9. Deep learning is inspired by the structure of the human brain.

10. The global AI market is expected to exceed one trillion dollars by 2030.

Questions 11–15: Summary Completion

INSTRUCTIONS: Complete the text below with words from the box. Choose NO MORE THAN ONE WORD OR A NUMBER from the box for each answer.

📦 WORD BANK: virtual, healthcare, $1.8, privacy, behavior, assistants, vision

⚠️ IMPORTANT: Write the words in the correct space on your answer sheet. Answers with incorrect spelling will be marked wrong.

Artificial intelligence has become integrated into everyday life in numerous ways. 11. assistants like Siri and Alexa can understand voice commands through natural language processing. Platforms such as Netflix use recommendation systems that analyze user 12. to suggest relevant content. Self-driving cars rely on computer 13. to safely navigate roads. In the field of 14., AI can identify diseases from medical scans with high accuracy. The AI market is forecast to reach $1.8 trillion by 2030, though this growth raises concerns about 15. and ethics.

Section 2: The Concerns About Artificial Intelligence (Questions 16–25)

Questions 16–20: Multiple Choice

INSTRUCTIONS: Choose the correct letter, A, B or C. Write ONLY the correct letter on your answer sheet.

16. According to Professor Elena Rodriguez, AI technology

17. The McKinsey report suggests that by 2030

18. Dr. James Liu's research found that most facial recognition systems

19. The AI recruitment tool showed bias because

20. Stephen Hawking warned that AI could be

Question 21: Sentence Completion

INSTRUCTIONS: Complete the sentence below with a word taken from Reading Section 2. Use ONE WORD for your answer.

⚠️ IMPORTANT: Write the answer in the correct space on your answer sheet. Answers with incorrect spelling will be marked wrong.

Questions 22–25: Short Answer

INSTRUCTIONS: Write NO MORE THAN TWO WORDS AND/OR A NUMBER for each answer.

⚠️ IMPORTANT: Write the answer in the correct space on your answer sheet. Answers with incorrect spelling will be marked wrong.
