t3h, unless I’m misreading the conversation, I think the point was that you were quoting an article written by someone other than Jeffrey Paul, whereas mdoudoroff was referring to the article linked by tehn in the initial post of this thread, which was written by said Jeffrey Paul. So you are talking about two different articles, both of them mentioned in this thread.

As far as I can understand the technical discussion based on those articles:

  • What is sent unencrypted on opening an application is not a unique application ID / hash, but a unique developer ID / the public part of the developer certificate, which is something that “everyone” knows in any case. So Apple, and anyone capable of monitoring and logging your network traffic, does not in fact see “this guy just opened Live” but rather “this guy just opened an application by Ableton”. (There’s a small sketch of what such a request contains right after this list.)
  • Jeffrey Paul’s main point is not necessarily that Apple is spying on you (although he does assume some level of malice, which is debatable) – rather, that as long as the information is sent unencrypted over plain HTTP, anyone can “spy on you”, i.e. see which applications you open and when. The correction in the other article changes this to “anyone can see which developers’ applications you open and when”.
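
For the curious, here’s roughly what such a revocation check looks like at the protocol level - a minimal sketch in Python using the cryptography and requests libraries, not Apple’s actual implementation. The file names and the responder URL are placeholders; the point is that the request identifies a certificate (issuer hashes + serial number), not the binary being launched:

    # Sketch only: build and send an OCSP request for a Developer ID certificate.
    # The certificate files and responder URL below are illustrative placeholders.
    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    with open("developer_id_cert.pem", "rb") as f:   # hypothetical file
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer_ca.pem", "rb") as f:           # hypothetical file
        issuer = x509.load_pem_x509_certificate(f.read())

    # The request contains only the issuer hashes and the certificate's serial
    # number, i.e. it names the developer certificate, not the app you launched.
    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request_der = builder.build().public_bytes(serialization.Encoding.DER)

    # Historically this went out over plain HTTP, so anyone on the network path
    # could read it. (The real responder URL comes from the certificate's AIA
    # extension; the one below is just the host mentioned in this thread.)
    resp = requests.post(
        "http://ocsp.apple.com",
        data=request_der,
        headers={"Content-Type": "application/ocsp-request"},
    )
    print(ocsp.load_der_ocsp_response(resp.content).response_status)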

None of these points changes the fact that the original intention is (add “at least on paper” for those who are doubtful) to check on the fly that developers’ certificates are still valid, in case a signed app gets caught doing nasty things to your computer and Apple finds out and revokes the certificate. The main technical problem could be remedied simply by using HTTPS. But does this otherwise sound roughly correct, or am I misinterpreting / misreading something?

(Please try not to read any claims between the lines regarding whether any of this is a good or a bad thing - I’m simply trying to figure out whether I understood the original claim and the corrections correctly…)

7 Likes

In addition to the technical discussion (and to the privacy/surveillance discussion) there is also a wider question of ownership that Jeffrey Paul was referring to in the title of his piece: Your Computer Isn’t Yours. This refers to Apple’s aggressive strategy of restricting how their devices can be used and repaired, and what software can be used in what circumstances. We have had a similar discussion about the T2 chip in Apple computers, which basically acts as a gatekeeper making sure that only certain parts can be used in repairing/building the devices. As we all know, that discussion went nowhere because of… you guessed it… the SECURITY argument. When we are talking about T2 or OCSP or any other crap like this, what we are really talking about is the limits of control: how much control are we giving the manufacturer over our devices, and through them, over our daily lives?

2 Likes

True, that’s the other side of the discussion, which I tried to sidestep for the sake of understanding the technical side - e.g. the difference between what was claimed, what the actual mechanism is, and what the practical difference between them is. And only after that, what the privacy / control implications could be.

No matter whether you “trust” a company or not, the point does stand that some companies have the capability to remotely revoke a certificate on the fly and stop signed, already installed apps from (easily) working on your computer. Some don’t. Whether or not that capability will ever be abused, whether the information about which developers’ apps you have installed and use matters to anyone who doesn’t need to know it, and whether the capability can be used for the good of the common user (the security implications) - it’s still a question of principle for many people.

(Reminds me of when Amazon sold certain Kindle books they realized had copyright / ownership issues, and then simply remotely deleted the books from users’ devices and refunded their money - which, unsurprisingly, didn’t make everyone happy. One can argue that people only license the content for their use, but that doesn’t change the fact that something like that tends to feel like “someone took away a book I owned” rather than “someone revoked my license to content I purchased”.)

On one hand, I’m kind of undecided on the whole control issue and what’s enough / too much. On the other hand - I suppose there is a reason why I moved to a Linux system after 15 years of Apple at home, and am currently trying to get rid of most of the online services belonging to companies that are just a bit too scary and large for my liking. It’s not all black & white in a “screw Apple, Google and evil social media companies” way, but there are things that worry me about the direction these things are going in.

(Edit: I guess my point is that while I’m very interested in the subject and very worried about some things I’ve been witnessing for the past 15 years as a computer user and a software engineer / random tinkerer, I’m still kinda undecided on where to draw the line, and hence tend to err on the neutral side of things - knowing that’s a stance in itself…)

2 Likes

Apple released a statement, and it sounds like the incident is prompting some change:

7 Likes

  • A new encrypted protocol for Developer ID certificate revocation checks
  • Strong protections against server failure
  • A new preference for users to opt out of these security protections

Sounds like they’re promising to address all the major issues raised (both the ability to eavesdrop and the “ownership” side of things) - that’s good!

11 Likes

Yes - make a small edit to your hosts file: add the line

    127.0.0.1 ocsp.apple.com

somewhere in it.
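
If you’d rather script that change than edit the file by hand, a rough sketch along these lines works too (it assumes the standard /etc/hosts location, and needs to be run with sudo):

    # Rough sketch: append the blocking entry to /etc/hosts if it isn't there yet.
    # Assumes the standard hosts file location; run with sudo.
    HOSTS_PATH = "/etc/hosts"
    ENTRY = "127.0.0.1 ocsp.apple.com"

    with open(HOSTS_PATH, "r+") as hosts:
        if ENTRY not in hosts.read():   # read() leaves the file position at the end
            hosts.write("\n" + ENTRY + "\n")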

I also now use https://objective-see.com/products/lulu.html

3 Likes

If you really hate Apple validating your software, you can always just totally disable Gatekeeper.

I don’t recommend it for most users, but it’s an option.

I used to be super paranoid about my data, and to a point I still am (I do all the classic security stuff), but once you dive into the infosec rabbit hole you realize there’s only so much you can do to stop data leaking from your devices - and honestly, I don’t have the time to constantly worry about it.

3 Likes

You’re using lulu instead of lil’ snitch?

What other stuff do you add to it to block?

Yeah, AFAIK lil’ snitch isn’t going to work in the future unless they do an overhaul. I block everything by default and let it through as it tries to connect. I don’t want to out certain programs, but they call home a lot for no particular reason - stuff I’ve paid for. I assume it’s to do updates etc., but I don’t trust it. I actually block Max so that I don’t get that annoying yellow update bar up the top, for example. I also block a really, really useful program (again anonymous) that does correction for my speaker setup, which calls home A LOT and constantly tries to update. I just want it to work the way I have it now!

1 Like

Ultimately it comes down to trust, no? Your internet service provider usually gets a realtime picture of the sites you visit by virtue of providing DNS services. Same for your bank and debit/credit card issuer (who have full tabs on your spending but severely restricted access), and same for the card processor(s), who have (and do use) the option to run analytics on your purchases, individually and in large groups. Can this be used against you or to help you? Where does the “directly” line go in terms of looking at group behaviour and adjusting system thresholds? Apple, like most other companies, doesn’t have to store the request data, they’ve announced they won’t store IP addresses, and at the moment I’m unaware of any laws that mandate storing requests for launching software, even on a temporary basis (unlike, for comparison, for internet access).
I still trust Apple. Just about.

Yeah, I think it’s been covered in some of the linked articles in a much better wording, but for me it boils down to:

  1. Am I willing to trust the other party with the data in the first place? And if I don’t trust them, or don’t need / want the service enough to give them that data, can I opt out easily - or more preferably, is it opt-in in the first place? If the service is voluntary and clearly specified as such, it’s usually not a problem anyway - the complicated questions arise when something happens without one’s knowledge or against one’s will. (Taken to extremes, of course I could stop using a credit card and build my own computer with a niche OS on it and never take it online, nobody’s forcing me to use the stuff I do, but the practical point is probably clear here…)

  2. Can I trust the other party to treat and transfer the data in a secure manner - always use TLS/HTTPS when something sensitive is transmitted, store sensitive things encrypted whenever possible, restrict access to the data properly, et cetera? I think this is ultimately what ends up constantly worrying people who are stricter (or more paranoid) about security than me. Is the company competent and trustworthy enough that the data will only be seen / stored by the people and systems that need it, and not leaked or shared with other parties? Do they have to actively cooperate with a surveillance organization, versus is it data I’d feel uncomfortable handing to a surveillance organization? Et cetera. (I think in this specific case some people were worried about the fact that as the information is transferred unencrypted / over a non-secure connection, “anyone” can potentially eavesdrop on it, whether that makes sense or not - which is something Apple seems to have promised to change in the future.)

There are a lot of cases where my answer to both of these questions is complicated and/or of the “I’m not sure” variety.

(Reminds me, kind of related: I’m not sure how many here have seen the recent awful news about a largeish Finnish psychotherapy company having most of their unencrypted patient database, including complete notes about therapy sessions, stolen - and people being blackmailed with “send me Bitcoins or I will publish all your personal data and therapy session transcripts”. Apparently the theft was greatly helped by the very shoddy / lax security practices of said company, so it didn’t require much in the way of mad hacker skills to get the data.

I tend to specifically trust people who aren’t faceless & nameless and who do good by me, and I’d wager it’s the same with most other people - but more often than not, a large multinational company still has a better grasp of infosec, and of which sensitive data needs to be handled more carefully than usual, than a local company with in-house software developed by a single person who might not even be that experienced a developer. Even if I completely trusted a doctor I visited, can I trust the developers and maintainers of the patient record system to be up to their task?)

i honestly cannot understand why we should feel compelled to not call out companies who make software that collects data. it’s just a fact— the software does this, and most people don’t have monitoring software set up to know it’s doing it. there’s a chance the phone-home is legit and helpful, so there shouldn’t be outright resistance to all data contact— it should simply be transparent. because a lot of companies are now in the collection business, which (despite any of our personal preoccupation or ambivalence) is bad.*

*edited a lot of words out here. there is plenty of writing out there already on surveillance capitalism.

10 Likes

^- Also, this. If it comes as a complete surprise to most users that some kind of phoning home has been going on for years without their knowledge, it’s probably worth raising as an issue. Even if it’s done for a good purpose, is a perfectly normal way of doing things, and is perfectly OK for most people once it’s explained why it’s done - and even more so if it’s not.

1 Like

I think what I said is being misinterpreted. I have no problem calling out surveillance practices when they are provable and evident. Yes, companies could be more transparent about what they do or don’t do with your data, or how their software communicates outside of your machine - they sort of might tell you this when you sign a ToS or data protection policy, or by virtue of you living in the EU. But I’m not going to say company X is bad because their software phoned home, for a number of reasons:

  1. I don’t know whether that request is malicious or not, and I’m not going to insinuate either way without proof, including forensic evidence of that request. As far as I’m concerned, software I agreed to install is liable to connect to other machines.
  2. It’s possible an employee of said company lurks here and becomes aware that people are blocking their requests, resulting in them changing how that is done, either positively or negatively. In the current circumstances I have total control over what happens, and I’m happy with that. I don’t want that to change.

The focus of my post was my rationale for blocking requests, rather than supporting some kind of argument that 2 companies whose software I use - software that has been excellent and well supported over the years - should for some reason be ‘outed’. Again, no proof, just my personal preferences for managing how I work with a computer.

To be transparent from my end, the companies are Max, Sonarworks and Microsoft (vscode).

3 Likes

thanks for your followup— we are in agreement. i specifically stated that there’s a difference between “outing” (insinuating harm) and simply stating that some software does a lot of communication (without accusation of intent).

in fact, giving companies an opportunity to clarify what their software is doing i think helps (potentially) separate them out from the bad actors.

4 Likes

I believe we are :upside_down_face: !

1 Like

Let’s not get carried away with the “apps phoning home” analogy, because what we are talking about in the context of macOS Catalina and Big Sur is not that apps phone home - they phone to Apple, and by this they give Apple the power to approve or block third-party processes inside the user’s machine. This has raised Apple to a unique position that no other consumer electronics manufacturer has been in. Let’s stop pretending that this is some kind of industry-standard procedure, because it is not, not on the consumer computer market.

2 Likes

It also seems to me that the “good” that OCSP does is this: if an app has already made it past the gatekeeper/security stuff on the install side but then at some point later does something dodgy enough to have its certificate revoked, Apple can then stop the app from loading.

That seems like such a weird edge case to check for (approving an app, and then it “breaks bad”) in exchange for accepting that apps on your machine can be remotely prevented from launching.
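
For what it’s worth, the decision that edge case buys you boils down to something like the sketch below. This is not Apple’s actual implementation (Gatekeeper’s logic isn’t public) - just the general shape of acting on a revocation response, again using Python’s cryptography library:

    # Sketch of the launch-time decision described above; not Apple's actual code,
    # just the general shape of acting on an OCSP (revocation) response.
    from cryptography.x509 import ocsp

    def allow_launch(ocsp_response_der: bytes) -> bool:
        """Block the launch only if the developer certificate is known to be revoked."""
        response = ocsp.load_der_ocsp_response(ocsp_response_der)
        if response.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
            # No usable answer (server down, network blocked, etc.): fail open.
            return True
        return response.certificate_status != ocsp.OCSPCertStatus.REVOKED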