TheServerSide invests a good deal of ink covering emerging data architecture issues, but one topic that rarely gets broached is network capacity management.
The events that transpire as a piece of data travels across the network -- from the moment an application manipulates the data to the point at which it is permanently persisted -- can be a serious bottleneck in production systems. In many applications, limitations of the Internet backbone present a difficult problem to address, and nowhere is this obstacle more apparent than in the Internet of Things (IoT). The bandwidth limitations of IoT networks have the potential to immobilize connected devices.
In an effort to bridge the gap in our coverage of latency issues, and to find strategies software developers and enterprise architects can use to minimize the effect of network-related problems, TheServerSide talked to Sean Bowen and Peter Hughes of Push Technology Ltd., a London-based company that specializes in network capacity management, quality of service, speed and scalability. The interview is available as a podcast below.
New approaches to solving an age-old problem
For software developers to avoid potential latency issues in the applications they design, Hughes' advice is to stop sending so much data across the network. In other words, put on the hat of a data architect, question which pieces of data should go across the network at runtime, and determine which pieces provide little to no value.
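That advice can be sketched in code. The snippet below is a minimal, hypothetical illustration of the idea -- sending only the fields of a device's state that have changed since the last transmission, rather than the full payload every time. The function and variable names are illustrative assumptions for this article, not part of any Push Technology API.

```python
# Hypothetical sketch: transmit only the delta between the last state a
# device sent and its current state, rather than the full state object.

def compute_delta(previous, current):
    """Return only the key/value pairs that changed between snapshots."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

# A sensor's full state vs. what actually needs to cross the network.
last_sent = {"temp": 21.5, "humidity": 40, "battery": 87}
new_state = {"temp": 21.7, "humidity": 40, "battery": 87}

payload = compute_delta(last_sent, new_state)
# Only the changed temperature reading needs to be transmitted.
```

In this toy example, a three-field state update shrinks to a single changed field, which is exactly the kind of runtime question Hughes suggests architects ask: which pieces of data provide little to no value on the wire?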
Interestingly enough, while the podcast discussion was originally intended to be about data quality and network latency issues, it soon turned toward RESTful Web services and how the industry is finding that, in high-frequency, data-heavy environments, a variety of streaming and stateful Web service design patterns are displacing REST as the preferred design approach. The podcast is worth listening to just for Hughes and Bowen's take on how big data problems are changing the design of enterprise applications.
Some of the questions we pose to Bowen and Hughes during the interview include:
- How can organizations minimize bandwidth problems and reduce latency issues?
- How does an organization know it is about to encounter latency and bandwidth issues?
- What would be a real-world example of an IoT device hitting the wall in terms of network latency and bandwidth capacity?
- How have high-volume, high-bandwidth applications affected RESTful Web service development?
To hear about network management strategies forward-thinking organizations use to improve application performance, listen to the podcast.
What types of unpredicted network capacity management problems have your applications encountered? Let us know.