Some time ago I tweeted my concerns that although Google Android asks whether the consumer would like to enable Location Sharing (opt-in) during the initial device setup, one still has to manually disable location sharing in Android's web browser - it is enabled by default even if you don't opt in during the initial device setup. This, in itself, is enough to undermine Google's claim that Android location sharing is opt-in.
However, to make matters worse, when you install an Android update (as I did with Froyo and soon will again with Gingerbread) it reverts the browser's location sharing setting back to "On". This is completely unacceptable, and only now (after several months) have I noticed it, because one assumes that once you disable a setting it will persist. It is only because the issue came up again today that I checked my browser settings, to find that location sharing had been "on" for several months.
Furthermore, it has been reported that the Android Twitter Client also reverts location sharing back to "on" when it is updated.
I would advise anyone who has an Android device and does not want to disclose their location to the world and his dog to go into any net-facing apps (including the browser and social networking clients) and check that location sharing is turned off.
I will be discussing this with Google over the next couple of days, "suggesting" that this is dealt with before Gingerbread is released by making location sharing disabled by default in the browser and other apps - either that, or the initial setup where one can "opt in" to location sharing needs to persist across ALL applications, including the browser and social networking apps. I will post updates when they become available.
Google have claimed to the Press and Media that the latest privacy scandal regarding their interception of Internet communications whilst sniffing out WiFi hot spots with their Streetview cars was an "accident".
They have stated that the code was being worked on for a different project and somehow managed to get inserted into the Streetview project - and frankly that doesn't wash.
Having worked on large IT projects for 15 years, I have a strong understanding of the design, development, testing and deployment cycles of such projects, so let me explain a little about how it works.
1. The Design Phase

As the title suggests, this phase is where the project is originally defined and designed. Normally, at the beginning of this phase there would be a very high-level concept design which would not include any "code" as such - its purpose would be to give management and executives a human-readable outline of the design principles and purpose of the project.
Once this has been signed off by management and a project leader/manager has taken control, that design concept will be fleshed out to make it ready for the engineers - this would result in documentation still at quite a high level (human readable) with perhaps some "pseudo code" but certainly nothing more.
The output from this phase would consist of a lot of reference documents, a technical glossary, a project plan and many documents defining technical functionality and specifications - these would then become the core knowledge resources for the entire project and would be used by developers, testers and even management throughout.
2. The Development Phase

Nothing too complex in describing this phase - it is what it says on the tin. Using the design references and technical specifications, the engineers develop the code base for the project. They liaise with the designers frequently, and once they have some code it goes off for testing and debugging.
3. The Testing Phase

Testing and debugging rely heavily on the technical specifications and various other documents from higher up the chain. Test environments are set up to mimic the real world, and extensive testing of every single piece of code is carried out. This is one of the most important phases in any IT project, and it lasts a long time. Every single byte of data produced by the tests is inspected to ensure that the code is working as planned. It never does, at least not in the early phases of a project, so there is a lot of interaction between developers and testers, and again a lot of interaction between developers and designers.
4. The Deployment Phase

In essence, once a project has been thoroughly tested and is seen as stable, it will be deployed into the real world. This doesn't mean that the three previous groups become obsolete - in fact they continue to redesign, redevelop and retest in order to add new features, remove features which are not needed and deal with bugs or unexpected behaviour which was not picked up in the labs. And believe me, these -always- manifest - I have yet to work on a large project which works as desired first time round; it simply doesn't happen. The project manager has to deal with change requests, bugs, resource issues, efficiency issues and a whole bunch of other things.
So the question is: how does a piece of code "intended" for another project entirely manage to find its way into the Streetview project without being noticed? The short answer is that it doesn't - it simply is not possible, because of the very granular method in which projects are developed.
At the very worst it would have been picked up in Phase 3 (Testing), as the data coming back from the test environments would include all this "accidental" data and would be noticed by the people doing the testing. For it to be "rogue" code at this phase, one would assume there would be no technical specifications for it - which would immediately ring alarm bells with the testers as they found themselves with data that was not defined anywhere.
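To illustrate how quickly undocumented data would surface in testing, here is a minimal sketch in Python of the kind of check a tester might run against the technical specification. All field names here are entirely hypothetical - this is not Google's actual schema, just an illustration of the principle:

```python
# Hypothetical specification: the fields a WiFi scanner is *documented*
# to collect (illustrative names only, not a real Google schema).
SPEC_FIELDS = {"ssid", "bssid", "signal_strength", "channel", "gps_lat", "gps_lon"}

def undefined_fields(record: dict) -> set:
    """Return any fields in a captured record that the spec does not define."""
    return set(record) - SPEC_FIELDS

# A test record containing intercepted payload data would stand out at once,
# because no specification document defines it:
captured = {
    "ssid": "HomeWiFi",
    "bssid": "00:11:22:33:44:55",
    "signal_strength": -60,
    "payload_bytes": b"GET /mail HTTP/1.1 ...",  # undocumented "accidental" data
}

print(undefined_fields(captured))  # → {'payload_bytes'}
```

The point is not the code itself but the process it represents: testers compare everything a system emits against what the specifications say it should emit, so data with no specification is precisely what gets flagged.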
Even if it was missed during the testing phase (which is incredibly unlikely) it would certainly be noticed in the data coming back during the early stages of deployment - which is always examined thoroughly - you simply cannot fail to notice all this incoming data containing the contents of Internet communications.
Furthermore, one has to assume that the size of this data (considering it has been collected for over 3 years) would be significant - probably hundreds of terabytes. That all has to be stored somewhere, and believe me when I say database and system administrators know their systems very well indeed. It is their job to know what is in their systems and why it is there - they need to know this to keep on top of resources and to manage access control and backups. You can't store all this extra data accidentally; it takes physical space, money and real man hours to manage it.
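As a rough illustration of the scale involved, here is a back-of-envelope sketch. Every figure in it is an assumption chosen purely for illustration - the fleet size, driving hours and capture rate are not known numbers about Google's operation:

```python
# Back-of-envelope estimate of storage for 3 years of payload capture.
# ALL figures below are assumptions for illustration, not known values.
cars = 200                        # assumed number of Streetview cars
hours_per_day = 8                 # assumed driving hours per car per day
days = 3 * 365                    # "over 3 years" of collection
capture_rate_mb_per_hour = 100    # assumed captured payload per car per hour

total_mb = cars * hours_per_day * days * capture_rate_mb_per_hour
total_tb = total_mb / 1_000_000   # MB -> TB (decimal units)
print(f"{total_tb:.0f} TB")       # → 175 TB
```

Even with these deliberately modest assumptions, the result is on the order of hundreds of terabytes - a volume of storage that has to be provisioned, backed up and paid for, which no administrator simply fails to notice.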
So do I trust Google when they say it was an accident? Absolutely not - they knew they had the data, they knew where and what that data was, and they stockpiled it for 3 years - and it is likely they would have continued to do so had Germany not demanded to know what data they were collecting.
Google may well be able to pull the wool over the eyes of regulators, press, media and the general public - but anyone who has worked professionally on large IT projects knows full well that this was no accident - it just doesn't happen that way.