As we have seen with the library’s deployment of the Koha library management system, the free software movement has evolved beyond the wishful thinking of its early champions to become a viable competitor in numerous information technology endeavors. Indeed, it is our contention that the economic trend does not favor traditional commercial software as the principal solution for organizational IT. This trend is not hard to understand in terms of consumer demand, where free solutions are preferable even when they are not as good as commercial alternatives, so long as they are good enough. But consumer demand in itself would not endanger the proprietary model were it not for the strong incentives among software developers themselves to give away their code.

To appreciate the incentives at work in current software development, we must bear in mind the need for an enterprise to focus on its core business and outsource the rest on the most favorable terms. In the past, proprietary, commercial code vendors saved their customers money by automating manual routines with off-the-shelf software that could be mass-produced far less expensively than in-house programming could accomplish. They also provided a certain peace of mind by offering a product that was standardized, at least among the customer base, which created an incentive for businesses to buy what everyone else was buying, thus reinforcing the code proprietor’s standard.
Eventually, the success of the shrink-wrapped software solution exposed its Achilles’ heel: a software company profiting from the sale of multiple iterations of its standard code found it increasingly challenging to accommodate the exceptions and local configurability demanded by customers with similar but not absolutely identical needs. Different institutional workflows, organizational structures, legal requirements and commercial relationships required linking applications and exchanging data in ways that no single software vendor could possibly anticipate. Institutions were thus forced either to purchase customized code from the vendor (and hope the customization would still function after the next upgrade), lobby company developers and the customer user group for enhancements in the next release (and hope the majority of customers supported the enhancements), or muddle along with the status quo (and give up hope of further automation in that particular realm).
The challenge to this economic model came when widespread use of the internet allowed the sharing of code that was written not to be sold, but to solve particular problems in particular institutions. The code was posted and left “open” for all to use with the caveat YMMV (“your mileage may vary”), in explicit recognition that local circumstances might necessitate adaptation on the part of the user. In time, the internet was host to vast libraries of open code contributed by thousands of programmers. Volunteer communities and standards bodies evolved that facilitated even greater sharing of solutions. In large sectors of the IT economy, programmer talent was lured from the profits of proprietary code sales to the rapid development and deployment environment offered by what came to be known as the Open Source movement. For network and server administrators struggling with tight budgets, the growth of mature applications that rivaled and even surpassed the reliability, scalability and integrability of commercial equivalents made the switch to Open Source an increasingly attractive option.
While chaos would seem to have been an inherent risk in this worldwide openness, in reality the development community rapidly sorted winners from losers and built upon what was easiest to integrate with previous or ongoing developments. In this way, Open Source inspired not only innovation but standardization, a benefit to IT maintenance as important as the significantly reduced price of software. This revolution has occurred so rapidly that many institutional managers above the level of day-to-day IT work are unaware of the low-cost (often free) options ripe for the taking.
Much of the software an organization runs is funded directly or indirectly as a cost-center item by the companies that need it. Those companies need a great deal of cost-center, non-differentiating software. They are willing to invest in its creation through the Open Source paradigm because it allows them to spend less on their cost centers by distributing the cost and risk among many collaborators, and makes more efficient use of their software dollar than the retail paradigm.
This “retail paradigm” is the prism that too often distorts the managerial view of software acquisition. Word processors, spreadsheet programs, image manipulators, even database applications and web servers, tend to be evaluated as end-user commodities, to be paid for and consumed in the same manner as electronic books or music in digital format. But apart from the vendors who profit by the retail model, there is no reason for anyone to think of such programs as “end use” consumables rather than as tools that help them in their work, as means to an end. Users would ordinarily choose free software were they not under the impression that the programs they pay for will be better supported and gain useful new features more rapidly. After all, at least some of the money they pay is expected to go back into product maintenance and development.

But unlike physically tangible goods, software, once created, can be replicated infinitely with no further production cost, and can be distributed at almost no cost over the internet. Under the retail paradigm, the price of software is essentially arbitrary once production costs have been recouped. In the Open Source paradigm, developers are not losing profits by forgoing sales. In the first place, the expense of writing code is not viewed as an investment for which a return must be justified, but as the cost of solving problems pursuant to the institution’s main business. In the second place, there is an entire community of developers in other institutions willing and ready to extend the code’s functionality and adapt it to more uses than its originators could have envisioned. In the words of Network World’s Stephen Walli, “The value gained by each contributor is enormous when compared to the cost of contributing” (“Open source: nobody is working for free,” 2 Sept. 2010).
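The arithmetic behind this argument can be made explicit. The following toy sketch (all figures are hypothetical, chosen only for illustration) contrasts retail pricing, where revenue is decoupled from a near-zero marginal cost of replication, with collaborative development, where a fixed development cost is simply divided among the institutions that share it:

```python
# Toy model: why retail pricing becomes arbitrary once development
# costs are recouped, and how cost-sharing spreads a fixed cost.
# All numbers are hypothetical.

def retail_margin(dev_cost: float, price: float, units: int) -> float:
    """Vendor profit after the fixed development cost is paid.
    Marginal cost per copy is treated as zero, so every unit sold
    beyond the break-even point is nearly pure margin."""
    return price * units - dev_cost

def shared_cost(dev_cost: float, collaborators: int) -> float:
    """Each collaborator's share of a jointly developed codebase."""
    return dev_cost / collaborators

# A $1M application sold at $200 to 10,000 customers:
print(retail_margin(1_000_000, 200, 10_000))  # 1000000
# The same $1M application co-developed by 50 institutions:
print(shared_cost(1_000_000, 50))             # 20000.0
```

The point of the sketch is not the particular numbers but the structure: under the retail model the price bears no necessary relation to cost once the fixed investment is recovered, while under the collaborative model each participant pays only a fraction of the total.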
And here is why the long-term economic trend is working against the proprietary code model: developing in corporate isolation, or collaborating through subcontracted non-disclosure arrangements, the for-profit code writer aims at differentiation in a market where differentiation adds little or no value, and can in fact be a liability. Customers will not particularly care about bells and whistles if the basic functionality that helps them in their work can be had for free. The open source alternative not only suffices; the open source community will respond more quickly and more accurately when the market requires innovation and variation, because the commonly held asset of open code can be instantly leveraged by thousands of programmers for thousands of different purposes. In that community even novice developers can search the internet for serviceable solutions that require only a little modification of the source code, which they are of course free to make. Professional developers can team up with developers in other businesses, libraries, science labs, etc., to share the burden of creating applications of mutual benefit, and their efforts only have to start where the ever-expanding repositories of open source leave off. Their contributions to those repositories will make someone else’s development efforts easier in the future.
The proprietary software vendors, on the other hand, cannot as effectively leverage this vast savings in development effort. They can take advantage of open code written by others, but since their motive is proprietary differentiation, they do not share the part of their code that is unique, and therefore cannot benefit from the advice of the larger development community. Their code is, in effect, above criticism. Open Source software, by contrast, is essentially peer-reviewed software. Any bug or security hole gets noticed and reported immediately in public forums, and workaround tips are quickly disseminated. Vendors, meanwhile, have every incentive to stay mum, hope any bugs go unnoticed, and slip the fix into the next release. Whereas commercial software brought much-needed improvements over the home-grown systems and manual office work of the past, in this day of ultra-cheap storage and virtually free internet transport, the closely guarded code of the proprietary model would be an obstacle to greater efficiencies and greater economies of scale, were it not for the fact that the community is free to share its own solutions, regardless of what commercial software companies want to keep hidden.
It must be mentioned that Open Source has not rendered commercial incentives in IT work obsolete. “Pioneer” enterprises have a marketable niche at least until an Open Source technologist manages to replicate or improve on the technology. OpenOffice, for example, offers word processing, spreadsheet, presentation and database software every bit as sophisticated in functionality as Microsoft’s, can be used free of charge, and leaves developers free to extend its functionality. When OpenOffice became the property of Oracle, the open source community responded by forking it as LibreOffice, a virtual clone. In other cases, vendors offer a robust open source version, available for free and continually improved through community development, but charge for an “Enterprise edition” with proprietary extensions and service guarantees; its core, however, is still based on the evolving work of the Open Source community.
And there are companies that charge a fee to host, administer, and even develop extensions to open source applications. Until the university recently established its own cloud server infrastructure, the WMU library employed the London-based PTFS company to host its Koha library management system. In that hosting arrangement, any development PTFS might do for hire on WMU’s behalf would be made available to the Koha community. By the same measure, any useful developments by other institutions were likewise available at no cost to WMU. Furthermore, PTFS’s client, i.e., the WMU librarians, had complete “root user access” to make their own developments. And lastly, when the library decided not to host with PTFS any longer, the company surrendered the Koha source code and the WMU data. This is now a typical arrangement between small institutions without the means to support Open Source software and IT companies that can provide that support cheaply by not having to pass on proprietary license fees.

While we predict growth in the commercial business of supporting Open Source development, we are not predicting the extinction of proprietary software itself. We are only pointing out that its economic viability is, or should be, limited to areas where differentiation is an advantage. As the Google Apps Marketplace demonstrates, cloud-based hybrids that offer software as a service are finding the means to add their specialized additions to the great core of common code, and can do so more cheaply than they could develop software as a proprietary commodity. This should strike a chord with businesses that would rather pay only for the distinct functionality they need than for the bloatware of additional features which the retail developer includes to justify higher development costs.

In terms of what software is worth paying for, and for how much, it is up to every organization to understand the economic implications within the context of its core business.
Alas, there is no substitute for institutional diligence in matters of information technology. There is no comprehensive solution, nor will there likely ever be one, with proprietary code or with Open Source. The institutional variations are too great for any one vendor, or any group of programmers, to make a one-size-fits-all application, particularly when the pace of technological change continues to accelerate. Thus, while comprehensiveness of functionality might seem critical in evaluating software solutions, in the long term it is not more important than conformance to data standards and exchange protocols. The ability to “play well with others” is a criterion institutions ignore at their peril. All the more important, then, to adopt applications which expose their source code to the review and scrutiny of the community.
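To make the “play well with others” criterion concrete, here is a minimal sketch (in Python, using only the standard library, with an entirely hypothetical record) of exporting a catalog record as Dublin Core XML, one of the metadata exchange standards that allows a system such as Koha to share its data with other applications regardless of vendor. This is an illustration of standards-conformant exchange in general, not a depiction of Koha’s actual export mechanism:

```python
# Sketch: serialize a simple catalog record as Dublin Core XML,
# a widely adopted metadata exchange standard. The record and
# field names here are hypothetical examples.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def record_to_dublin_core(record: dict) -> str:
    """Serialize a flat record dict into Dublin Core elements."""
    ET.register_namespace("dc", DC_NS)
    root = ET.Element("metadata")
    for field in ("title", "creator", "date", "identifier"):
        if field in record:
            el = ET.SubElement(root, f"{{{DC_NS}}}{field}")
            el.text = record[field]
    return ET.tostring(root, encoding="unicode")

sample = {
    "title": "Koha: an open source ILS",
    "creator": "Example Author",
    "date": "2010",
    "identifier": "urn:isbn:0000000000",
}
print(record_to_dublin_core(sample))
```

Because the output follows a published standard rather than a proprietary schema, any conforming harvester or catalog can consume it; that independence from any one vendor is precisely what the interoperability criterion protects.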