Large software companies today have rejected the classic engineering idea of "do it right the first time." They have instead adopted the theme "release on Monday, patch on Tuesday." Many software developers sadly release programs that aren't totally finished and haven't been adequately tested. They expect to release updates frequently, and they expect users to put up with problems until the patches arrive.
The modern software development lifecycle gives developers an excuse to release half-finished software and repair it later. The cycle assumes that releasing a program isn't the end of programming it; the program will require continual maintenance.
Unfortunately, most developers on modern platforms can't avoid having to modify their software. A new OS might require the programmer to change code and rebuild it. A new protocol might become standard. New hardware with a higher screen resolution may require changing the graphical interface. Software today never feels totally finished.
When people buy a new game today, do they really want to wait for a large patch to download and install before they can play it? Why wasn't the game made correctly in the first place? Furthermore, if the patch is half the size of the game, you can conclude that very little was done properly in the first release. Console development today is plagued by a frantic rush to get games out the door with the expectation of patching them many times afterward.
In the classic days of video game development, companies had to send binary files to production that were solidly built. A company that burned maybe tens of thousands of ROM chips that contained software bugs would face financial ruin. Just one game sent to production with a major bug would probably mean the end of the company. Spontaneous software updates weren't possible, and making bad cartridges meant the destruction of much material. So companies were disciplined to produce quality games the first time.
In the days of the Atari 2600 or NES, a very small team of people could create a game. In fact, just one person could have created an entire game. Today, however, developing games for modern computers is more cumbersome. A company needs an army of artists, level designers, people making 3D meshes, and musicians. I feel the emphasis today has shifted from programming an elegant engine to producing visual effects. Simple games are often better. The concept of a game should be most important, and graphics should be secondary.
First of all, I develop apps for Android. The user base is massive. A developer could easily reach a large audience over the entire world. News of a good app can spread quickly. The entire trend of computing is going mobile, and large desktops and laptops are quickly vanishing. Programmers developing modern software will have to develop for at least one mobile platform in the near future. Development costs are low, and programmers don't need to worry about physically packaging software and manuals. I am an advocate for open software in general, and I appreciate that the Android OS and all development tools are free. A developer could use any OS (Ubuntu, Solaris, macOS, etc.) to develop for Android. I also like the fact that Android is Unix/Linux-based.
Like most people, I was glad to see the demise of Windows and second-rate personal computers from companies like Dell and HP. However, I didn't want Windows replaced with a new OS that is equally vulnerable in terms of security and with equal incompetence from the developers of the OS. What Microsoft did to so many users' computers over such a long period of time was borderline criminal. Unfortunately, Android is the new Windows.
Microsoft was largely responsible for the hardware churn in the PC market. A consumer bought a new computer, paid for a Windows license, and then Windows gradually made the user's machine so slow and unusable that the consumer had to buy another Windows machine. Consumers just didn't learn. Most of the computers thrown away weren't really junk. Their hardware components were engineered to last decades. If one part failed, a user could generally open the computer and replace just that part. The Windows operating systems were largely to blame for a computer superficially appearing to have failed. Of course the hardware suppliers also didn't see a problem with the buy-and-toss cycle, as they made more money the faster a customer discarded their hardware. Unfortunately, the hardware churn continues with mobile devices, with the average user discarding a smartphone after just 18 months.
I don't feel other mobile platforms like iOS are any better overall than Android. Apple provides better security, and decompiling an Apple app is almost impossible. I also believe C, C++, and Objective-C are superior to Java. However, I've heard horror stories of developers spending significant time and money making an app that Apple rejects in the end. A developer doesn't know if Apple will approve an app until it's complete and submitted to Apple. Apple's screening process for apps is too strict. A developer can argue a rejection, but Apple often doesn't listen.
Perhaps most importantly, I have a problem with Apple depending on slave labor for profit. One of Apple's suppliers placed suicide nets around its buildings and essentially forced people to make Apple's junk. Workers didn't seem free to leave, and they were restricted to barracks. Apple claims to be socially conscious, yet it makes money by exploiting weak human rights laws in second- and third-world countries. Apple and its customers seem to think they're doing the Chinese a favor by giving them jobs, but their "employment" would qualify as slavery or exploitation in America. Unfortunately, so many electronics companies follow the route of Apple.
I have never programmed for a more frustrating platform than Android. So little works the first time. The programmer spends long hours on problems that the Android SDK should have cleanly solved. Programming for Android is the first and only time I have used extreme profanity in my source code comments.
The Android architecture quite frankly is messed up. Developing for Android is filled with non-intuitive quirks that totally catch the programmer off guard. The beginner's app will throw more exceptions than Windows Vista. Android looks amateurish. There are so many places where Android basically says, "Didn't you know thing A has to happen before thing B?" "No! How was I supposed to know that? That's totally non-intuitive! That doesn't make any sense! Was I supposed to read and memorize all the Android documentation (which changes continually) before starting programming?" The otherwise experienced programmer starting fresh on Android would have no idea. There are so many obscure facets of Android development that you have to know for your app to work properly.
My biggest frustration early on was properly handling configuration changes, like rotating the device. Why should a rotation destroy an Activity? Just because the widgets are resizing and repositioning? MFC/Windows and Qt seemed to handle resizing windows just fine without destroying state. The widgets and layouts should simply adjust automatically, with the rest of your Activity's state remaining intact. Even worse, Android requires the programmer to reestablish and maintain that state (for which there are many methods, none of them standard). This process is slow on Android. It's strange, unexpected things like this that make Android development a true joy. No beginner developer would expect this. The programmer can do hackish things like disable screen rotation or register a rotation callback, but again, why can't the rotation simply happen with Views getting resized appropriately and without the Activity totally restarting? These problems make Android feel like it was slapped together without Google properly thinking through the architecture.
The Activity lifecycle's architecture is fundamentally flawed. There are few guarantees about which lifecycle callback gets called and when. This uncertainty especially applies to the "shutdown" callbacks onPause(), onStop(), and onDestroy(); if the system kills the app's process, onStop() and onDestroy() may never run at all. The calls can also vary by device. Android encourages the programmer to put state in an Activity subclass, but that state gets destroyed and needs to be re-created upon simple configuration changes like rotating the device.
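The destroy-and-recreate dance described above can be sketched as follows. This is a minimal, hypothetical Activity (the class name and the counter field are mine, not from any real app), showing the Bundle round-trip a simple rotation forces on the programmer:

```java
import android.app.Activity;
import android.os.Bundle;

// Hypothetical Activity illustrating the state-juggling described above.
// On a simple rotation, Android destroys and re-creates this Activity,
// so any state worth keeping must round-trip through a Bundle.
public class CounterActivity extends Activity {
    private int counter = 0;  // state that a rotation would otherwise wipe out

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // Re-created after a configuration change: restore what we saved.
            counter = savedInstanceState.getInt("counter", 0);
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putInt("counter", counter);  // called before destruction -- usually
    }

    // onPause() is the only "shutdown" callback that is reasonably guaranteed;
    // onStop() and onDestroy() may be skipped if the process is killed first.
    @Override
    protected void onDestroy() {
        super.onDestroy();
    }
}
```

Any field not explicitly saved into the Bundle simply vanishes when the user turns the phone sideways.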
Since the Android SDK lacks basic functionality in certain areas, the programmer is forced to get around problems with ad-hoc means. So many so-called solutions just seem hackish. Often, Android doesn't have a recommended way of doing something that should be standard in a GUI library. How many times have I had to go to Stack Overflow and read solutions where the authors admit, "This is hackish, but I can't find a better way"? Standard functions and classes that worked for a long time become deprecated quickly, and the programmer doesn't know whether using them is still acceptable.
A great example of ad-hoc solutions arises when handling configuration changes. How many different ways can the programmer handle a common device rotation? The programmer could lock the display to landscape or portrait mode. Or a callback could catch the rotation. Or the app could allow the Activity to be destroyed but save and restore its state. Or the programmer could subclass Application to create a more permanent home for state.
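One of the options above, catching the rotation in a callback instead of letting the Activity restart, is declared in the manifest. This is a hypothetical excerpt (the Activity name is a placeholder):

```xml
<!-- Hypothetical manifest entry: opt this Activity out of the default
     destroy-and-recreate behavior for rotations. Android then calls
     onConfigurationChanged() instead of restarting the Activity. -->
<activity
    android:name=".MainActivity"
    android:configChanges="orientation|screenSize" />
```

That none of these four approaches is clearly the blessed one is exactly the problem.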
Android devices come in too many shapes and sizes. There are many, many hardware manufacturers. There are many brands. Almost every type of device can have unique cameras, microphones, speakers, screen resolutions, screen sizes, processors, and variants of the Android OS. Although giving hardware manufacturers freedom is good, the resulting fragmentation creates problems when testing apps.
The need to test isn't helped by the horrible emulators available for Android. For a long time, the only emulator available for Android was through its Eclipse plugin. But how long did it take to simply start the emulator? Maybe 15 minutes? Starting an app was slow. Emulators couldn't mimic all the sensors. Even if you test many simulated hardware configurations, you won't be able to test every variant that exists in the field. Some users will have trouble using your app on a real device that you just couldn't test. Your app can't keep up with new devices coming out.
You know a platform has a problem when conditional statements in code need to check whether the user is running version X of Android. This turns the code into total slop.
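The version-gating pattern looks like this. The constant below is a stand-in for `Build.VERSION.SDK_INT` (the real Android field) so the sketch runs on a desktop; the strategy strings are illustrative, not real API calls:

```java
public class VersionGate {
    // Stand-in for android.os.Build.VERSION.SDK_INT, which on a real
    // device reports the API level of the installed OS; 26 is Android 8.0.
    static final int SDK_INT = 26;

    static String notificationStrategy() {
        // The branching pattern the text complains about: one code path
        // per OS generation the app wants to support.
        if (SDK_INT >= 26) {
            return "use notification channels";       // mandatory on 8.0+
        } else if (SDK_INT >= 21) {
            return "use the Notification.Builder path";
        } else {
            return "use the legacy notification API";
        }
    }

    public static void main(String[] args) {
        System.out.println(notificationStrategy());
    }
}
```

Multiply this by every API that changed across versions, and the slop compounds.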
On the surface, using Java for modern app development seems logical. Java took object-oriented programming to the extreme, and it did away with operations considered "unsafe" in C/C++. However, a vast amount of the executable code that runs in apps is actually native. Many third-party library developers simply wrap old C/C++ code with Java wrappers. In fact, I haven't used a third-party library that doesn't wrap C/C++ code. So what's the point of demanding that programmers create apps in Java when programmers can link in C++ shared objects?
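The wrapper pattern is easy to sketch. The library name and native method below are hypothetical; on a device, the `.so` would be bundled with the app and loaded at class-initialization time:

```java
// Minimal sketch of the Java-wrapper-around-C/C++ pattern described above.
// "imagecodec" and compress() are hypothetical, not a real library.
public class ImageCodec {
    static {
        try {
            System.loadLibrary("imagecodec");  // would load libimagecodec.so
        } catch (UnsatisfiedLinkError e) {
            // Expected anywhere the native library isn't present.
            System.out.println("native library unavailable");
        }
    }

    // Declared in Java, implemented in C/C++ on the far side of JNI.
    // Java's safety guarantees stop at this boundary.
    public static native byte[] compress(byte[] raw);

    public static void main(String[] args) {
        System.out.println("wrapper class loaded");
    }
}
```

Everything behind that `native` keyword runs outside the reach of Java's memory safety.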
I've noticed that source code written in Java becomes very large very fast. Even the simplest app requires thousands of lines of Java. Everything must be a class. So much code is required for all the callbacks.
The Java exceptions are truly annoying. Many functions in the Java standard library can throw a variety of exceptions, and Android's SDK adds more. During development, a programmer unfamiliar with a class will undoubtedly face numerous crashes due to exceptions. Exceptions are valuable during testing; they should improve the stability of software and help to highlight flaws in your code. But end users encounter them frequently too, and many seem unwarranted. For the programmer, source code becomes bloated with try-catch blocks.
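The try-catch bloat shows up even in trivial code. A sketch using only the Java standard library (the file path is just an example that won't exist):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Even a trivial file read forces explicit handling of checked exceptions:
// every caller either catches IOException or declares "throws" itself.
public class ReadFirstLine {
    static String firstLineOf(String path) {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line = reader.readLine();
            return line != null ? line : "";
        } catch (IOException e) {
            // The compiler will not let us ignore this branch.
            return "error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLineOf("/nonexistent/file.txt"));
    }
}
```

Now add the Android SDK's own exceptions on top, and the ceremony grows with every API call.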
Despite using "safe" Java, apps can still be unstable. They crash with various exceptions. Devices still freeze. A great amount of C/C++ code is underneath the typical app, which Java can't safeguard.
Java doesn't (and shouldn't) trust user software, so it sandboxes apps and separates them from the real OS. The sandbox makes it harder for the programmer to know which command is really a system call. The sandbox is an extra, unnecessary layer of complication. The runtime (Dalvik, and later ART, rather than a standard JRE) slows an app's execution. The sandbox can also limit real, physical resources like RAM: even if plenty of physical memory is available, the runtime might not give it to an app. I feel that better protection mechanisms exist, such as protection rings and privilege bits. The sandbox mechanism doesn't make up for all the other security vulnerabilities in Android.
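The heap cap the runtime imposes can be observed directly with standard Java. (On Android, `ActivityManager.getMemoryClass()` reports a similar per-app limit; `Runtime` is plain standard-library Java, so this sketch runs anywhere.)

```java
// Query the hard ceiling the runtime places on this process's heap.
public class HeapCap {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // hard heap ceiling
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently reserved
        System.out.println("heap cap (MB): " + maxMb);
        System.out.println("currently reserved (MB): " + totalMb);
        // The cap holds regardless of how much physical RAM the machine has.
        System.out.println("cap positive: " + (maxMb > 0));
    }
}
```

An allocation past that cap throws OutOfMemoryError even when gigabytes of physical RAM sit idle.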
One of my biggest problems with Android is the lack of security. Google basically ignored all the computer/cyber security knowledge from the last few decades. The security is so bad that I have to wonder if Google purposely left the system open to hacking and spying. A malicious app can basically have free rein over a user's device.
An amateur hacker can easily decompile an Android app. Many decompiling tools are freely available. A hacker can almost perfectly reconstruct an app's source code, minus, of course, comments and local variable names. All the class names, field names, and function names are totally visible in the reconstructed source. Many programs contain intellectual property, which a competitor could easily uncover.
Protection of secondary storage is weak. The typical app can read and write most areas of the file system with few restrictions. One app can read data that another app created. Although Android strangely treats each app as a unique "user," files created on shared secondary storage aren't private.
The permissions in Android are totally meaningless. When a user installs an app from the Play Store, he/she simply gives the app whatever permissions it listed. "The app wants to write to secondary storage, read my contacts, know my location, and use my camera? Ok! I have no idea what the app will really do behind the scenes, but I need this app." Google eventually moved its presentation of permissions from install time to run time, but the typical user accepts them all anyway.
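The shopping list of capabilities described above is declared in the app's manifest. A hypothetical excerpt using real Android permission names:

```xml
<!-- Hypothetical manifest excerpt. Before runtime permissions, the user's
     only choice at install time was to grant all of these or walk away. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.CAMERA" />
```

Nothing in this declaration constrains what the app actually does with those capabilities once granted.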
A poorly written (but not malicious) app could actually have an attack vector unknown to developers that could compromise a user's entire device.
The typical user's Android device can become bogged down with ads. The "ad API" within the Android SDK feels more like an enabler of adware. So many apps are littered with annoying ads, and avoiding tapping them is difficult. A common complaint of Windows is now common on Android.
The implications of security flaws are far more significant today than they were during the times of Windows 95 or XP. Today, most smartphones track their users and are covered in sensors. A hacker could potentially observe a user's movements via GPS or turn on microphones or cameras at will. People also tend to store more personal information, like contacts, on smartphones.
Users are often misinformed about how secure their apps really are. While studying Android security, I ran into a very poor app that claimed to offer privacy by hiding the user's pictures. The app simply created a "hidden" directory with a dot in front of the directory's name. The typical file-manager app wouldn't show the folder. However, anyone who shelled in to the device could see the folder (e.g., with ls -a). Of course the app had millions of downloads and overwhelmingly positive reviews. Likewise, many apps that claim to manage a user's passwords are equally horrible and simply hide them using "security through obscurity."
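The entire "security" scheme takes two shell commands to reproduce (the directory and file names here are made up for the demonstration):

```shell
# Recreate the dot-prefix "hiding" trick described above.
mkdir -p demo/.private_photos
touch demo/.private_photos/photo1.jpg

# A naive listing shows nothing...
ls demo

# ...but -a reveals the "hidden" folder immediately.
ls -a demo
```

A dot prefix only hides a directory from tools that politely choose not to show it; it is not an access control.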
The Play Store is filled with garbage. I appreciate Google allowing anyone to publish an app to the Play Store, but many apps just don't work. How many apps do I need to install from the Play Store before finding one that does what I want? So many apps have a poor interface. So many apps crash. The install-try-uninstall cycle is frustrating.
Many apps amount to nothing more than adware and possibly spyware.
Android changes frequently, and Google often doesn't communicate these changes clearly to developers.
A programmer may have created a simple app that ran fine for years, but then Android version X breaks it. As soon as a new version of the OS is available, the programmer needs to test all his/her apps to make sure they're compatible with the new OS. This can be a frustrating process if the programmer has many apps and needs to modify old source code untouched for years. At least Microsoft understood the need for backwards compatibility.
For a long time, Eclipse was the primary IDE for Android development. Eclipse was difficult to use. There were too many configurations. Updating the SDK and its components was awkward. Far too often, my combination of Eclipse and the Android SDKs would just become unusable, and I would need to reinstall Eclipse.
Google based Android Studio on the IntelliJ IDEA platform rather than Eclipse, yet somehow made the new IDE even more difficult to use. There are so many configurations. There are too many options. Gradle is horribly confusing.
One reason I disliked Windows was all the software updates. Microsoft constantly sent out security-critical patches. Windows installed updates on almost a daily basis, and they usually required system restarts followed by long installation and configuration periods. Why didn't Microsoft just program their stuff right to begin with? The same was true for so many third-party Windows developers, who just assumed customers wouldn't mind getting half-finished software and patching it later. Unfortunately, this pattern continues with Android and its apps.