True, that’s the other side of the discussion, which I tried to sidestep for the sake of understanding the technical side - e.g. the difference between what was claimed, what the actual mechanism is, and what the practical difference between them is. And only after that, what the privacy / control implications could be.

No matter whether you “trust” a company or not, the point does stand that some companies have the capability to remotely revoke a certificate on the fly and stop any signed, already installed apps from (easily) working on your computer, and some don’t. Regardless of whether that capability will ever be abused, whether the information about which developers’ apps you have installed and use matters to anyone who doesn’t need to know it, and whether the capability can be used for the good of the common user (the security implications), it’s still a question of principle for many people.

(Reminds me of when Amazon sold certain Kindle books they later realized had copyright / ownership issues, and then simply remotely deleted the books from users’ devices and refunded their money - which, unsurprisingly, didn’t make everyone happy. One can argue that people only license the content for their use, but that doesn’t change the fact that something like this tends to feel like “someone took away a book I owned” rather than “someone revoked my license to content I purchased”.)

On one hand, I’m kind of undecided on the whole control issue and what’s enough / too much. On the other hand, I suppose there is a reason why I moved to a Linux system after 15 years of Apple at home, and am currently trying to get rid of most online services belonging to companies that are a bit too large and scary for my liking. It’s not all black & white in a “screw Apple, Google and evil social media companies” way, but there are things that worry me about the direction these things are heading.

(Edit: I guess my point is that while I’m very interested in the subject and very worried about some things I’ve been witnessing for the past 15 years as a computer user and a software engineer / random tinkerer, I’m still kinda undecided on where to draw the line, and hence tend to err on the neutral side of things - knowing that’s a stance in itself…)

2 Likes

Apple released a statement, and it sounds like the incident is prompting some change:

7 Likes

* A new encrypted protocol for Developer ID certificate revocation checks
* Strong protections against server failure
* A new preference for users to opt out of these security protections

Sounds like they’re promising to address all the major issues raised (both the ability to eavesdrop, and the “ownership” side of things), that’s good!

11 Likes

Yes - make a small edit to your hosts file: add the line

```
127.0.0.1 ocsp.apple.com
```

somewhere in the file.
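For what it’s worth, this can also be done from Terminal - a minimal sketch, assuming the standard macOS hosts file location and admin rights (the cache flush just makes the change take effect without a reboot):

```sh
# Point ocsp.apple.com at the local machine, so the revocation
# checks never leave your computer
echo "127.0.0.1 ocsp.apple.com" | sudo tee -a /etc/hosts

# Flush the DNS cache so the new entry is picked up immediately
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
```

Removing the line (and flushing again) restores the normal behaviour.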

I also now use https://objective-see.com/products/lulu.html

3 Likes

If you really hate Apple validating your software, you can always just totally disable Gatekeeper.

I don’t recommend it for most users, but it’s an option.
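A rough sketch of how, assuming a macOS version of the era discussed here (Catalina / Big Sur), where the flag is still called `--master-disable` - it’s the switch behind the “Anywhere” option in Security & Privacy:

```sh
# Turn off Gatekeeper's assessment of downloaded software entirely
sudo spctl --master-disable

# Check the current state ("assessments disabled" after the above)
spctl --status

# Turn it back on later
sudo spctl --master-enable
```

Note this only disables Gatekeeper’s launch-time assessment; whether it stops every check the OS makes is a separate question.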

I used to be super paranoid about my data, and to a point I still am (I do all the classic security stuff), but once you dive into the rabbit hole of infosec you realize there’s only so much you can do to stop data leaking from your devices - and honestly, I don’t have the time to constantly worry about it.

3 Likes

You’re using LuLu instead of Little Snitch?

What other stuff do you add to it to block?

Yeah, AFAIK Little Snitch isn’t going to work in the future unless they do an overhaul. I block everything by default and let it through as it tries to connect. I don’t want to out certain programs, but they call home a lot for no particular reason - stuff I’ve paid for. I assume it’s to do updates etc., but I don’t trust it. I actually block Max so that I don’t get that annoying yellow update bar up the top, for example. I also block a really, really useful program (again anonymous) that does correction for my speaker setup, which calls home A LOT and constantly tries to update. I just want it to work the way I have it now!

1 Like

Ultimately it comes down to trust, no? Your internet service provider usually gets a realtime understanding of the sites you visit by virtue of providing DNS services. The same goes for your bank and debit/credit card issuer (who have full tabs on your spending, but severely restricted access), and for the card processor(s), who have (and do use) the option to run analytics on your purchases, individually and in large groups. Can this be used against you, or to help you? Where does the “directly” line go in terms of looking at group behaviour and adjusting system thresholds? Apple, like most other companies, doesn’t have to store the request data; they’ve announced they won’t store IP addresses, and at the moment I’m unaware of any laws that mandate storing requests for launching software, even on a temporary basis (unlike, for comparison, for internet access).
I still trust Apple. Just about.

Yeah, I think it’s been covered in some of the linked articles with much better wording, but for me it boils down to:

  1. Am I willing to trust the other party with the data in the first place? And if I don’t trust them, or don’t need / want the service enough to give them that data, can I opt out easily - or, preferably, was it opt-in in the first place? If the service is voluntary and clearly specified as such, it’s usually not a problem anyway - the complicated questions arise when something happens without one’s knowledge or against one’s will. (Taken to extremes, of course I could stop using a credit card and build my own computer with a niche OS on it and never take it online - nobody’s forcing me to use the stuff I do - but the practical point is probably clear here…)

  2. Can I trust the other party to handle and transfer the data in a secure manner - always use TLS/HTTPS when something sensitive is transmitted, store sensitive things encrypted whenever possible, restrict access to the data properly, et cetera? I think this is ultimately what ends up constantly worrying people more strict (or paranoid) about security than me. Is the company competent and trustworthy enough that the data will only be seen / stored by the people and systems that need it, and not leaked or shared with other parties? Do they have to actively cooperate with a surveillance organization - and is it the kind of data I’d feel uncomfortable with a surveillance organization having? Et cetera. (I think in this specific case some people were worried that since the information is transferred unencrypted, over a non-secure connection, “anyone” can potentially eavesdrop on it, whether that worry makes sense or not - which is something Apple seems to have promised to change in the future.)

There are a lot of cases where my answer to both of the questions is complicated and/or of “I’m not sure” variety.

(Reminds me, kind of related: I’m not sure how many here have seen the recent awful news about a largish Finnish psychotherapy company having most of its unencrypted patient database, including complete notes about therapy sessions, stolen - and people being blackmailed with “send me Bitcoins or I will publish all your personal data and therapy session transcripts”. Apparently the theft was greatly helped by the very shoddy / lax security practices of the said company, so it didn’t require much in the way of mad hacker skills to get the data.

I tend to specifically trust those people who aren’t faceless & nameless and who do right by me, and I’d wager it’s the same with most other people - but more often than not, a large multinational company still has a better grasp of infosec, and of which sensitive data needs to be handled more carefully than usual, than a local company with in-house software developed by a single person who might not even be that experienced a developer. Even if I completely trusted a doctor I visited, can I trust the developers and maintainers of the patient system to be up to their tasks?)

i honestly cannot understand why we should feel compelled to not call out companies who make software that collect data. it’s just a fact— the software does this, and most people don’t have monitoring software set up to know it’s doing it. there’s a chance the phone-home is legit and helpful, so there shouldn’t be outright resistance to all data contact— it should simply be transparent. because a lot of companies are now in the collection business which (despite any of our personal preoccupation or ambivalence) is bad.*

*edited a lot of words out here. there is plenty of writing out there already on surveillance capitalism.

10 Likes

^- Also, this. If it comes as a complete surprise to most users that some kind of phoning home has been done for years without their knowledge, it’s probably worth raising an issue about. Even if it’s for a good purpose, a perfectly normal way of doing things, and perfectly OK for most people once it’s explained why it’s done (and even more so if it’s not).

1 Like

I think what I said is being misinterpreted. I have no problem calling out surveillance practices when they are provable and evident. Yes, companies could be more transparent about what they do or don’t do with your data, or how their software communicates outside of your machine - they sort of might tell you this when you sign a ToS or data protection policy, or by virtue of you living in the EU. But I’m not going to say company x is bad because their software phoned home, for a number of reasons:

  1. I don’t know if that request is malicious or not, and I’m not going to insinuate either way without proof, including forensic evidence of that request. Personally, I accept that software I agreed to install is liable to connect to other machines.
  2. It’s possible an employee of said company lurks here and becomes aware that people are blocking their requests, resulting in them changing how that is done, either positively or negatively. In the current circumstances I have total control over what happens, and I’m happy with that. I don’t want that to change.

The focus of my post was the rationale for blocking requests, rather than supporting some kind of argument that two companies whose software I use - software that has been excellent and well supported over the years - should for some reason be ‘outed’. Again, no proof, just my personal preferences for managing how I work with a computer.

To be transparent from my end the companies are Max, Sonarworks and Microsoft (vscode).

3 Likes

thanks for your followup— we are in agreement. i specifically stated that there’s a difference between “outing” (insinuating harm) and simply stating that some software does a lot of communication (without accusation of intent).

in fact, giving companies an opportunity to clarify what their software is doing i think helps (potentially) separate them out from the bad actors.

4 Likes

I believe we are :upside_down_face: !

1 Like

Let’s not get carried away by the “apps phoning home” analogy, because what we are talking about in the context of macOS Catalina and Big Sur is not that apps phone home - they phone to Apple. And this gives Apple the power to approve or block third-party processes inside the user’s machine. This has raised Apple to a very unique position that no other consumer electronics manufacturer has been in. Let’s stop pretending that this is some kind of industry standard procedure, because it is not - not on the consumer computer market.

2 Likes

It also seems to me that the “good” that OCSP does is that if an app has already made it past the Gatekeeper / security stuff on the install side, but then at some point later does something dodgy to have its licence revoked, Apple can then stop the app from loading.

That seems like such a weird edge case to check for (approving an app, and then it “breaks bad”) in exchange for giving someone the ability to remotely prevent apps from launching.

What OCSP is used for in this case is checking the validity of developer certificates, not app certificates.

E.g. if you download and install an app outside the App Store that’s supposedly signed by Ableton, but was actually signed by someone else who has gotten hold of Ableton’s developer keys - or install an app that really is by Ableton, but they’ve suddenly decided they’d like to destroy everyone’s computers - then Apple can react when they find out, revoke the developer certificate immediately, and stop (self-distributed) applications from the offending developer from working the next time they are started.

The other way to do this would be to periodically download a list of revoked developer certificates, separately from starting or installing applications. This has also been used elsewhere (possibly not by Apple), but it has the drawback that the checks aren’t real time - you could happily run a malicious app until the next time the revocation list was updated.

So basically the use it’s intended for is: “we’ve found out that this developer - who signed up for our developer plan, promised to play by the rules, and has apparently signed this app with their certificate to certify it’s actually written by them - has written malware or leaked their keys, so they are either no longer playing by the rules, or possibly not the actual party who wrote and signed the app you are trying to start”.
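Incidentally, you can inspect this signing chain locally with macOS’s built-in tools - a quick sketch, where the application path is just a placeholder:

```sh
# Show the signing details, including which Developer ID signed the app
# ("/Applications/SomeApp.app" is a placeholder path)
codesign --display --verbose=4 /Applications/SomeApp.app

# Verify the signature and the certificate chain behind it
codesign --verify --deep --verbose=2 /Applications/SomeApp.app

# Ask Gatekeeper whether it would currently accept the app
spctl --assess --verbose /Applications/SomeApp.app
```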

As for the trust / ownership implications, there’s been discussion about those in the thread already. I guess the other side is that Apple is running a trust mechanism for the developers (developer plans & certificates), and the OCSP checks or something similar are a logical / necessary evil for that. If you don’t at least periodically check whether the developer certificate of company x is still valid, you don’t know whether you can still trust that certificate to only be in the possession of company x, or trust company x to be honest. (Whatever should be done with that information is another thing - and actually, I’m not entirely sure whether you can bypass code signing and certificate checks altogether, if you really want to for some reason…?)

FWIW, a very similar process happens in web browsers when you surf to HTTPS-secured sites (e.g. this one) - the validity of the certificate chain is checked, and if something along the way has been compromised, the browser knows the certificate is revoked / invalid and displays a huge warning about an insecure site / potential security issue, which is a major pain in the butt to skip, for a good reason. Most browsers specifically use the very same OCSP mechanism. I suppose it’s just less scary and intrusive when it isn’t about installed applications but sites you navigate to, and the governing body for the whole world wide web is not a single computer / software company. But in that sense one could argue it is an industry standard procedure, in a way.
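If you want to see that mechanism in action, you can run the same kind of check by hand against any HTTPS site with OpenSSL - a rough sketch, assuming the site’s issuer (intermediate) certificate has already been saved as issuer.pem (example.com and the responder URL are placeholders):

```sh
# Fetch the site's leaf certificate
openssl s_client -connect example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > leaf.pem

# Print the OCSP responder URL embedded in the certificate
openssl x509 -in leaf.pem -noout -ocsp_uri

# Ask the responder whether the certificate has been revoked
# (use the URL printed by the previous command)
openssl ocsp -issuer issuer.pem -cert leaf.pem -text \
  -url http://ocsp.example.net
```

A “Cert Status: good” in the output is the same answer a browser gets when it decides not to show you the scary warning page.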

6 Likes

Most browsers specifically use the very same OCSP mechanism. I suppose it’s just less scary and intrusive when it isn’t about installed applications but sites you navigate to, and the governing body for the whole world wide web is not a single computer / software company. But in that sense one could argue it is an industry standard procedure, in a way.

Not only is it not one single company, it also has no power to block a non-certified site from opening, nor does it have the control to block your browser from starting on your computer. Only Apple has that kind of control over your computer. I do understand the urge to explain this insanity in familiar terms and situations, but there are none. We are living in a different timeline now, because of this. To quote the man: dear frog, this water is now boiling.

5 Likes

It’s worth noting that, when the service is functioning properly, this does not completely stop app launches. If we’re going to have a conversation about this, it needs to stop being sensationalized.

Apple will happily allow you to launch an app with a revoked or untrusted developer certificate; they just want to be sure you know the certificate is revoked or untrusted, so that you can knowingly run something that may be malware rather than doing so unwittingly. (For the record, this is easily bypassed by right-clicking the application icon, clicking Open, and then choosing to proceed when prompted with the warning.)
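Somewhat related, for those who prefer Terminal: clearing the quarantine attribute that marks an app as freshly downloaded has much the same effect as the right-click trick, at least for the standard first-launch prompt (the path below is a placeholder):

```sh
# Remove the quarantine attribute from a downloaded application,
# so it's no longer treated as freshly downloaded at launch
xattr -d com.apple.quarantine /Applications/SomeApp.app
```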

This isn’t about some draconian control method. It’s about keeping technologically less literate folks (like my brother, sister, or mother, or almost all of our grandparents) from unwittingly running malware on their computers. And since Apple has already stated they’re going to (1) make it easier to bypass their certificate validation process and (2) make that process more secure from snooping, I really don’t see what all the hubbub is still about.

6 Likes