Apple disables ADP in the UK market | Copilot exposing private GitHub pages
#177 - Rather than build a backdoor, Apple chooses to disable encryption | Once an AI learns, you cannot get it to unlearn
Building a backdoor vs. disabling encryption!?!
Apple chooses the lesser of the two evils.
There is something very, very attractive about digital surveillance. It gives sweeping powers while demanding few resources, and the individual you are surveilling will more likely than not never know it is happening.
Last month, the UK government used its highly controversial 2016 Investigatory Powers Act to instruct Apple to build a backdoor into iCloud. Read the analysis by Bruce Schneier here. Or read this post on The Register.
I have often spoken about the popular arguments against privacy. There is a chapter about data privacy that deals with these arguments in my book - Monkey Shakespeare Typewriter: Cybersecurity for Everyone. The arguments come in two flavours - safety (national safety and personal safety) and “if you have nothing to hide, why do you need encryption”.
If the disabling of ADP really goes ahead, it is a major setback for privacy and security. Data should always be encrypted when stored. Unencrypted data adds an attack vector you would rather not have.
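To make the point concrete, here is a minimal sketch of the idea behind client-side encryption, using Python’s cryptography library (Fernet). ADP itself works very differently under the hood, but the principle is the same: data is encrypted before it ever leaves your device, and only you hold the key.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key once and store it safely - whoever holds it can decrypt.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt locally, before the data ever reaches the cloud provider.
plaintext = b"the notes you are about to sync to cloud storage"
ciphertext = f.encrypt(plaintext)

# Only the ciphertext is uploaded or stored; without the key it is just noise.
assert f.decrypt(ciphertext) == plaintext
```

Skip that step - store the plaintext instead - and anyone who reaches the stored data can read it. That is exactly the attack vector ADP removes.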
Take Action:
If you are in the UK, speak up about this. You should have the right to encrypt your data, just like you should have the right to lock your door at night.
Turn on Advanced Data Protection manually for your Apple devices. Use this link to understand how to do it.
When an AI learns something, it learns it
It’s very difficult to get an AI to forget what it has learnt
If I ask you to think of a white elephant, you will. Then, when I ask you not to think of a white elephant, what will you do?
That’s exactly what AI does.
Read this post to understand how GitHub repos that were public even for a very short while have been cached and are now available through Copilot. Even after the repos were made private, the code is still available through Copilot.
As cybersecurity professionals keep warning, using AI haphazardly carries real risks - here, code you thought you had pulled from public view keeps resurfacing through an AI assistant.
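If you want a sense of your own exposure, a reasonable first step is to audit which of your repositories are public right now. Below is a rough sketch using the GitHub REST API - the org name is a placeholder and the token is assumed to have read access to the org. Keep in mind that taking a repo private afterwards does not claw back anything already cached, so secrets that sat in a briefly public repo should be treated as compromised and rotated.

```python
import os
import requests

ORG = "example-org"  # placeholder - your GitHub organization
TOKEN = os.environ["GITHUB_TOKEN"]  # assumed: a token that can read the org's repos

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

# Page through the organization's repositories and flag everything that is
# public right now - these are the repos whose code may already be cached.
page = 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/repos",
        headers=headers,
        params={"per_page": 100, "page": page, "type": "all"},
        timeout=30,
    )
    resp.raise_for_status()
    repos = resp.json()
    if not repos:
        break
    for repo in repos:
        if not repo["private"]:
            print(f"PUBLIC: {repo['full_name']} (last pushed {repo['pushed_at']})")
    page += 1
```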
Take Action:
Consider implementing a framework for AI risk management - it could be ISO 42001 or the NIST AI RMF.
Create an AI policy and roll it out in your organization. It should cover everything from employees’ independent use of GenAI (Shadow AI) to the tools your organization is building with AI and the official tools you allow. You might also have to create awareness around these risks.