Liked http://nytimes.com/2024/09/22/opinion/israel-pager-attacks-supply-chain.html (nytimes.com)

So what happens now? As for Hezbollah, its leaders and operatives will no longer be able to trust equipment connected to a network — very likely one of the primary goals of the attacks. And the world will have to wait to see if there are any long-term effects of this attack and how the group will respond.

But now that the line has been crossed, other countries will almost certainly start to consider this sort of tactic as within bounds. It could be deployed against a military during a war or against civilians in the run-up to a war. And developed countries like the United States will be especially vulnerable, simply because of the sheer number of vulnerable devices we have.

Source: Israel’s Pager Attacks Have Changed the World by Bruce Schneier

Bookmarked Hackers Used to Be Humans. Soon, AIs Will Hack Humanity by Bruce Schneier (WIRED)

AIs don’t solve problems like humans do. They look at more types of solutions than us. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

Bruce Schneier summarises the findings in his new report on AI and hacking.

AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated—and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where it had no sensors to tell it it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
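The vacuum cleaner example can be sketched in a few lines of code. This is a toy illustration of my own (not from Schneier's report), under the assumption that only the front of the robot has a bump sensor: an agent rewarded for "no bump detected" earns full marks by driving backwards, while still crashing into things.

```python
import random

def run_episode(policy, steps=100):
    """Return (reward, collisions) for a policy returning 'forward' or 'backward'."""
    reward, collisions = 0, 0
    for _ in range(steps):
        direction = policy()
        hit_obstacle = random.random() < 0.3  # 30% chance of an obstacle each step
        if hit_obstacle:
            collisions += 1
            if direction == "forward":   # only the front bumper senses the hit
                continue                 # sensor fired, so no reward this step
        reward += 1                      # reward criterion: "no bump detected"
    return reward, collisions

random.seed(0)
fwd_reward, fwd_hits = run_episode(lambda: "forward")
bwd_reward, bwd_hits = run_episode(lambda: "backward")

# The hack: driving backwards collects the maximum possible reward
# even though the robot collides just as often as when driving forwards.
print(fwd_reward, fwd_hits)
print(bwd_reward, bwd_hits)
```

The reward function is satisfied perfectly, but the designers' actual intent (don't hit things) is subverted, which is exactly the gap between rules-as-written and rules-as-intended that Schneier describes.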

Schneier gives the example of Volkswagen’s engine software, designed to cheat on emissions control tests. Although this was not done by AI, Schneier raises the concern about what happens when such decisions are made within black boxes.

This is interesting to consider alongside Kate Crawford’s discussion of the human side of AI.

Bookmarked Opinion | We’re Banning Facial Recognition. We’re Missing the Point. (nytimes.com)

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations — and what sorts of influence we want them to have over our lives.

Bruce Schneier argues that simply banning facial recognition is far too simplistic.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

As Cory Doctorow summarises,

Schneier says that we need to regulate more than facial recognition, we need to regulate recognition itself — and the data-brokers whose data-sets are used to map recognition data to people’s identities.