🔖IME | English | 2026 | Question 38 with Commentary | 🏛️ B3GE™

IME – 2026 Admission Exam
Illustrative image | Instituto Militar de Engenharia (IME).

🟨 TEXT 3.

A new "eye" may radically change how robots see

The low-power robotics system LENS merges a brainlike sensor, a chip and an AI model
By Kathryn Hulick

This hexapod robot recognizes its surroundings using a vision system that occupies less storage space than a single photo on your phone.

Running the new system uses only 10 percent of the energy required by conventional location systems, researchers report in the June Science Robotics. Such a low-power “eye” could be extremely useful for robots involved in space and undersea exploration, as well as for drones or microrobots, such as those that examine the digestive tract, says roboticist Yulia Sandamirskaya of Zurich University of Applied Sciences, who was not involved in the study.

The system, known as LENS, consists of a sensor, a chip and a super-tiny AI model to learn and remember location. Key to the system is the chip and sensor combo, called Speck, a commercially available product from the company SynSense. Speck’s visual sensor operates “more like the human eye” and is more efficient than a camera, says study coauthor Adam Hines, a bioroboticist at Queensland University of Technology in Brisbane, Australia.

Cameras capture everything in their visual field many times per second, even if nothing changes. Mainstream AI models excel at turning this huge pile of data into useful information.

But the combo of camera and AI guzzles power. Determining location devours up to a third of a mobile robot’s battery.

“It is, frankly, insane that we got used to using cameras for robots,” Sandamirskaya says. In contrast, the human eye detects primarily changes as we move through an environment.

The brain then updates the image of what we’re seeing based on those changes. Similarly, each pixel of Speck’s eyelike sensor “only wakes up when it detects a change in brightness in the environment,” Hines says, so it tends to capture important structures, like edges.
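The "eyelike" pixel described above can be sketched in a few lines: a pixel stays silent while brightness is constant and emits an event only when the change exceeds a threshold, which is why edges (sharp brightness transitions) dominate the output. This is a minimal illustration of the event-driven idea, not code from the LENS system; the function name and threshold are invented for the example.

```python
def pixel_events(brightness_stream, threshold=0.1):
    """Yield (time_step, polarity) events only when brightness changes enough.

    Models one event-driven pixel: no change in brightness -> no output.
    """
    events = []
    last = brightness_stream[0]  # reference level the pixel remembers
    for t, b in enumerate(brightness_stream[1:], start=1):
        if abs(b - last) >= threshold:
            events.append((t, +1 if b > last else -1))  # ON/OFF event
            last = b  # update the reference after firing
    return events

# A static scene produces no events; a passing edge produces a few.
static = [0.5] * 6
moving_edge = [0.5, 0.5, 0.9, 0.9, 0.4, 0.4]
print(pixel_events(static))       # []
print(pixel_events(moving_edge))  # [(2, 1), (4, -1)]
```

The contrast with a conventional camera is the point: the camera would report all six frames of `static` at full resolution, while the event pixel reports nothing at all.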

The information from the sensor feeds into a computer processor with digital components that act like spiking neurons in the brain, activating only as information arrives — a type of neuromorphic computing.
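A spiking neuron of the kind the paragraph describes can be sketched as a leaky integrate-and-fire unit: it accumulates incoming events, loses a little charge each step, and "activates" only when the accumulated input crosses a threshold. This is a generic textbook model offered for intuition; the parameters are illustrative and not taken from the Speck chip.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which a leaky integrate-and-fire neuron spikes.

    Silent inputs leave the neuron (nearly) idle, which is the source of
    neuromorphic hardware's energy savings.
    """
    v = 0.0           # membrane potential
    spikes = []
    for t, x in enumerate(inputs):
        v = v * leak + x      # leak a little, then integrate the input
        if v >= threshold:
            spikes.append(t)  # fire...
            v = 0.0           # ...and reset
    return spikes

# Sparse input -> sparse spikes; long runs of zeros trigger no activity.
print(lif_neuron([0, 0.6, 0.6, 0, 0, 0, 1.2, 0]))  # [2, 6]
```

Like the sensor, the processor does work only when information arrives, which is what the text means by components that "act like spiking neurons in the brain."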

The sensor and chip work together with an AI model to process environmental data. The AI model developed by Hines’ team is fundamentally different from popular ones used for chatbots and the like.

It learns to recognize places not from a huge pile of visual data but by analyzing edges and other key visual information coming from the sensor.

This combo of a neuromorphic sensor, processor and AI model gives LENS its low-power superpower.

“Radically new, power-efficient solutions for… place recognition are needed, like LENS,” Sandamirskaya says.

🔗 Adapted from: ScienceNews
https://www.sciencenews.org/article/robot-eye-artificial-intelligence-ai [Accessed on 14th July 2025].

🟨 QUESTION 38.

What type of information does the LENS system prioritize for localization?

🄰 Complete static snapshots of the environment.

🄱 Raw color data from every pixel.

🄲 Continuous spatial mapping.

🄳 Event-driven signals generating features such as edges.

🄴 Motion blur patterns for direction estimation.

Answer key: 🄳

🧭 1️⃣ Guided reading

The text explicitly contrasts how traditional cameras work with how the Speck sensor works. The goal is not to capture everything, but only the relevant changes in the environment.

📝 2️⃣ Technical analysis of the options

(A) ❌ Incorrect.
🚩 Trap: this describes how conventional cameras work.

(B) ❌ Incorrect.
The text states that the system avoids processing large volumes of visual data.
🚩 Trap: equating more pixel data with more relevance.

(C) ❌ Incorrect.
There is no traditional continuous mapping; the system responds to events.
🚩 Trap: a generic robotics term used as bait.

(D) ✅ Correct.
The sensor "wakes up" only when it detects a change in brightness, capturing edges and other key structures.
🚩 Trap avoided: a literal reading of "vision system".

(E) ❌ Incorrect.
There is no mention of motion blur or direction estimation.
🚩 Trap: technical extrapolation beyond the text.

⚠️ 3️⃣ Classic IME traps

• Confusing a traditional camera with a neuromorphic sensor
• Ignoring the concept of event-driven vision
• Overvaluing technical terms that do not appear in the text

🧠 4️⃣ B3GE™ Master Summary

✔ LENS does not capture everything, only relevant changes.
✔ The focus is on edges and visual events.
✔ Its energy efficiency comes from selective processing.

🔎 Answer confirmed: (D)