A lot happens before ideas become solutions.
At ElixWare we want to bring you more than just great, affordable software. We want to let you know how and why we do what we do.
Our Ruminations blog will bring you insights into how we got here and some of the things we consider when trying to help you run your business. We hope it gives you a better understanding of how we strive to better serve your needs.
These posts will get a little (or maybe a lot) technical. If you start hearing the adults from Peanuts while you read this, that's not uncommon. Just take a deep breath, quote your favorite Stuart Smalley affirmation, and take things one paragraph at a time.
This post will cover a performance-related concept that also touches on usability and budgets.
Dealing with programming at this level of detail is necessary to develop highly efficient code. Efficient code increases speed and reduces hardware requirements. This keeps costs lower for us, which keeps costs lower for our clients. Faster response times directly affect usability, new-system acceptance, and performance momentum.
Hardware performance has increased exponentially over the last two decades. This, along with falling prices relative to that performance, has resulted in a lot of software bloat. The expectation that the hardware will make up for poor system or software design has more than planted stakes in the software industry. It has set up shop and invited all its friends. Low standards produce slow, inefficient software.
One approach to increasing speed is caching. Caching works by reusing content the system has already prepared rather than recreating it each time it is requested. This can have a very positive impact on performance, depending on how it is implemented. Caching is generally an improvement over dynamically creating each page from scores of code files and hundreds of function calls and database requests. But not all content caching implementations are particularly efficient; many still rely on executing quite a bit of code and accessing the database to serve their content.
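To make that idea concrete, here is a rough sketch of basic content caching in Python. This is illustration only, not our actual code; the names render_page, get_page, and CACHE_TTL_SECONDS are made up for the example.

```python
# A minimal sketch of content caching: keep what was already rendered and
# hand it back while it is still fresh, instead of rebuilding it every time.
import time

CACHE_TTL_SECONDS = 300          # how long a cached page is considered "fresh"
_page_cache = {}                 # page key -> (rendered_html, timestamp)

def render_page(page_key):
    """Stand-in for the expensive path: templates, queries, formatting."""
    return f"<html><body>Content for {page_key}</body></html>"

def get_page(page_key):
    cached = _page_cache.get(page_key)
    if cached is not None:
        html, stored_at = cached
        if time.time() - stored_at < CACHE_TTL_SECONDS:
            return html                      # cache hit: no rebuilding, no database
    html = render_page(page_key)             # cache miss: do the expensive work once
    _page_cache[page_key] = (html, time.time())
    return html
```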
This blog post, like every other post on this site, is the product of "publish once": two pieces of static content merged automagically by our CIM.
We use two approaches to creating "publish once" content. One is to build the static content at the time the data or content changes. Examples of this are menus and navigation, the content of data-based dropdown menus, data and/or markup used in dashboards, lists of content (such as the letters available in a letter writing system, or blog posts on a website), and many others. Rebuilding these at the time their source data changes ensures everything is up-to-date and efficient.
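Here is a simplified illustration of that first approach, again just a sketch rather than our actual code: when the underlying data changes (a blog post is added, for example), the static fragment that depends on it is rebuilt right then, so every later page view only has to read a file. The file path and function names are assumptions for the example.

```python
# A minimal sketch of "rebuild at write time": the static post-list fragment
# is regenerated the moment the data changes, not on every page view.
import html
from pathlib import Path

POST_LIST_FRAGMENT = Path("fragments/post-list.html")  # assumed output location

def rebuild_post_list(posts):
    """Regenerate the static list-of-posts markup from the current data."""
    items = "\n".join(
        f'  <li><a href="{html.escape(p["url"])}">{html.escape(p["title"])}</a></li>'
        for p in posts
    )
    POST_LIST_FRAGMENT.parent.mkdir(parents=True, exist_ok=True)
    POST_LIST_FRAGMENT.write_text(f"<ul>\n{items}\n</ul>\n", encoding="utf-8")

def add_post(posts, title, url):
    """The write path: update the data, then rebuild the static fragment once."""
    posts.append({"title": title, "url": url})
    rebuild_post_list(posts)     # every later page view just reads the file
```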
The other approach is implemented by our CIM. The CIM runtime checks for valid "fresh" content very early during execution and serves up the cached content using minimal resources. This, combined with late loading of page data, allows us to serve up content quickly, and often without ever opening the database (which also has security advantages). By using the server's own caching (built into the operating system), we gain even more speed and even faster response times. You click, and it just works.
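Below is a loose sketch of that early freshness check. The names (published_path, build_page, FRESH_FOR_SECONDS) are hypothetical; our CIM works along these lines conceptually, but this is an illustration, not the real thing. If the published file is fresh, it is served immediately and the database is never touched; only when it is stale does the full build pipeline run.

```python
# A minimal sketch of an early "is the published content still fresh?" check.
import os
import time
from pathlib import Path

FRESH_FOR_SECONDS = 3600   # assumed freshness window for published content

def published_path(page_key):
    return Path("published") / f"{page_key}.html"

def is_fresh(path):
    """Fresh means the published file exists and is recent enough to serve."""
    try:
        return time.time() - os.path.getmtime(path) < FRESH_FOR_SECONDS
    except OSError:
        return False

def handle_request(page_key):
    path = published_path(page_key)
    if is_fresh(path):
        # The common case: read one file and return. No further application
        # code runs, and the database is never opened.
        return path.read_text(encoding="utf-8")
    # The rare case: rebuild, publish, then serve. build_page() is a stand-in
    # for the full dynamic pipeline (templates, queries, merging content).
    html = build_page(page_key)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(html, encoding="utf-8")
    return html

def build_page(page_key):
    return f"<html><body>Rebuilt content for {page_key}</body></html>"
```

Because the common path is a single file read, the operating system's own file caching can serve repeat requests from memory, which is where the extra speed mentioned above comes from.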
It's our job to anticipate your needs, and meet them efficiently, in order to serve you better. Publish once is just one way we strive to achieve that goal.