Are you leveraging the power of Redis? – by John Hall, Founder of Loyhall.com, Co-founder of Tradesy.com

Redis is known as either a NoSQL database or a cache. We all know that calling anything a “NoSQL” database doesn’t really tell us much about it, and using Redis as a cache is like using a Ferrari as a golf cart… it can do a lot more than drive you from hole to hole. So for those who aren’t familiar with Redis, I’m going to lay out what I think its best use cases are.

But first, a note of caution: I see a lot of tech articles that try to compare various NoSQL databases against each other. They will benchmark Redis vs. Mongo, ElasticSearch vs. Splunk, etc. Most of these comparisons are pointless and don’t tell you the whole story. The problem is that almost all of these tools are great at what they do, but they are almost all made to do different things! So I encourage you to learn what each type of NoSQL database is great at, and focus on leveraging its strengths without trying to make it do something it was never made to do.

As a data tool, Redis lives in a very special place. It excels as a read-heavy production data source. It has seen benchmarks of between 70k and 500k queries per second on a single server, depending on how you use it.

Redis is incredibly:

  • Stable
  • Fast
  • Light-weight
  • Simple to use

A great use-case example is a web page or API call to which an ID is passed, for example a user ID or a product ID. Pulling the associated product or user data (no matter how much) from ANY other source will take longer than pulling it from Redis.

If you use a key-value pair, with the key being the ID and the value being a data structure of your choice (JSON, for example), Redis can fetch it in a couple of milliseconds. That’s a lot faster than other NoSQL databases and potentially hundreds of times faster than a SQL database like Oracle or MySQL. But the key is to put all of the information you will need into the value of the key-value pair, because unlike a SQL database or filtered-search products like ElasticSearch and Solr, you can’t search for arbitrary data in Redis; you have to know where it is. You have to know the key.
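To make the pattern concrete, here is a minimal sketch of that ID-keyed lookup using the node-redis client; the key name and product shape are hypothetical:

```typescript
// Minimal sketch of an ID-keyed lookup in Redis, assuming Node.js and node-redis.
import { createClient } from "redis";

async function main() {
  const redis = createClient(); // defaults to localhost:6379
  await redis.connect();

  // Write everything the page will need under one key.
  const product = { id: "42", name: "Road Bike", price: 1299, category: "cycling" };
  await redis.set(`product:${product.id}`, JSON.stringify(product));

  // A page or API handler that receives the ID can fetch it in one round trip.
  const raw = await redis.get("product:42");
  const hydrated = raw ? JSON.parse(raw) : null;
  console.log(hydrated);

  await redis.quit();
}

main().catch(console.error);
```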

Another great use case for Redis is application data that doesn’t change very often, like category/sub-category data and business-logic constraints. Basically, anything that you don’t want to hard-code but at the same time don’t want to load from the SQL database on each page load.

I’ve found that Redis alone will help your site or app scale by a huge degree. As often as possible, read your data from Redis!

A lot of companies are using Redis these days, but I’m still surprised it’s not more. Perhaps it’s because people don’t understand the most obvious use cases. And now, with the stable release of Redis Cluster, companies will be able to reach speed and scalability levels never seen before. I really hope to see more companies take advantage of this great open-source database.

John Hall can be reached at www.linkedin.com/pub/john-hall

Node.js & the Event Loop – by Tim Fulmer, VP of Engineering at HopSkipDrive

Remember when object instantiation was such a big deal that EJBs made sense? It cost so many CPU cycles to allocate a new object that it made sense to cache objects in a pool and swap state in and out of them. I once saw a system that couldn’t stand up under load because of Object.newInstance.

So, I’ve done a lot with Java.  Enough to have wandered into the depths of the JVM more than a few times.  And friends, I’m never going back.

RAM and virtualized CPUs, cheaply and readily available through your cloud provider of choice, have made the JVM as obsolete as EJB. Node.js is one of a new generation of single-threaded execution environments helping to serve content.

Intelligent memory allocation strategies and horizontal CPU capacity allow for a stack of functions, each with everything it needs right in memory, maximizing CPU cycles on as many CPUs as it takes. Literally getting things done as fast as possible.

This runtime environment also accurately models what’s happening at the hardware level. Making an HTTP request in code queues up some state on the bus, the CPU pulls a bit to notify the network adapter to make the call, the network adapter reads the packet off the bus and sends it on its way. Later, the network adapter pulls a bit to notify the CPU, and a response is delivered back to the code.

Linux has offered asynchronous I/O for many years, and that model has been making its way up the stack. Node.js uses an event loop (provided by libuv, with V8 executing the JavaScript) to accurately model the underlying process, while taking away most concurrency concerns.
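As a minimal sketch of what that buys you in practice (assuming Node.js; the URL is just a placeholder), the single thread stays free while a request is in flight:

```typescript
// Non-blocking I/O on the event loop: the callback runs later,
// when the event loop delivers the response.
import https from "node:https";

console.log("request queued");

https.get("https://example.com", (res) => {
  console.log("response status:", res.statusCode);
  res.resume(); // drain the body so the socket is released
});

// The single thread keeps working without waiting for the network.
console.log("still running while the request is in flight");
```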

Of course, there are some trade-offs. JavaScript is one of the more loosely typed languages out there. There are many things that can be done with JavaScript that really aren’t good ideas. Automated code-quality tools like JSHint/JSLint can help, and there are some good SaaS tools available as well.

It’s also really easy to make changes in a JavaScript system. This can of course be a good thing, though it can also lead to some very interesting application behavior. At the code level, we’re used to anything being undefined at any time; combine that with a schema-less Mongo database and much of the traditional, strongly typed software-development safety net is missing. At the overall process level, this makes automated unit and integration tests forming a comprehensive regression test suite an absolute must.
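As one small illustration of that missing safety net (the User shape, the guard, and the assertions here are hypothetical), a runtime shape check wrapped in a unit test can catch documents that drift away from what the code expects:

```typescript
// Minimal sketch: a runtime shape check for documents coming out of a
// schema-less store, exercised with Node's built-in assert.
import assert from "node:assert";

interface User { id: string; email: string; }

function isUser(doc: unknown): doc is User {
  const d = doc as Record<string, unknown> | null;
  return typeof d?.id === "string" && typeof d?.email === "string";
}

// In a regression suite, these assertions flag documents with missing or
// re-typed fields before they reach production code paths.
assert.ok(isUser({ id: "42", email: "a@b.com" }));
assert.ok(!isUser({ id: 42 }));
```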

This isn’t necessarily a bad thing: add some continuous integration/continuous deployment on one of the auto-scaling cloud providers mentioned above and we’re starting to achieve a very high-velocity tech environment.

Are there things to look out for in a JavaScript system? Absolutely. At the same time, navigating these trade-offs can lead to a fast-moving technology solution, scaling linearly and predictably with load. I’m certainly never going back 🙂

Tim Fulmer can be reached at https://www.linkedin.com/in/timfulmer

Develop an App Using Native Code or Use a Hybrid/Web Framework – by Shuki Lehavi, Sr. Director of Engineering at J2

After PhoneGap and Titanium/Appcelerator, Ionic/Drifty is the latest to offer a Hybrid framework for developing cross-platform apps. I thought this was a great opportunity to revisit the fundamental question of “What is the best way to build your app? Use Native code or a Hybrid framework?”, and to see if the recent evolution of these Hybrid frameworks might change your decision.

Hybrid frameworks promise to solve two problems: 1) allowing Web developers with HTML/JavaScript experience to build apps; and 2) allowing developers to code the app once, and deploy it to many platforms, such as iOS and Android.

In the early years of Android and iOS development, building an app was a pain. Android apps were easy to code using Java, but creating an appealing user experience took a great deal of effort. In contrast, iOS allowed you to create a beautiful user experience, but the coding language (Objective-C) was cumbersome and illegible. A Hybrid framework sounded like the right idea at the time.

But after I coded my first commercial app using a Hybrid framework, I noticed the following issues:

1) Hybrid was great for ‘Hello World’ apps, but not for full-featured apps: Using the Hybrid framework, we got our first 10 screens up and running in no time. But the problems started when we wanted to do complex tasks such as running threads in the background to load data, recovering gracefully from network disconnects, controlling data caches and more. The app started crashing, and we soon realized that no Hybrid framework manages threads and tasks as efficiently as Native code.

2) There are not enough good developers (and resources) to support Hybrid frameworks: When something goes wrong with Native code, there are many developers and online resources you can call on. Search oDesk.com for “iOS” and you get 7,731 developers; “Android” gets you 13,770 developers. But if you search for “Titanium” you get 561 developers, and “Ionic” returns 126 developers, 9 of which are in the US.

A search on stackoverflow.com shows similar results. “Objective C” returns 112,000 questions answered and “Android” shows 649,000 questions answered, but a search for “Titanium” returns 11,000 results (most of them old) and “Ionic” returns 6,000.

I specifically selected stackoverflow.com and oDesk.com because I regard them as independent sources; when I looked at the websites of some of the Hybrid platform providers, I noticed very creative numbers. One platform provider claimed “Make your code part of other great mobile apps by publishing them in the Marketplace and sell to the 1.5 million developers”, which is troublesome.

So when your project goes south and the deadline is approaching, you will be hard-pressed to find the experts or resources that can help you. Most of these Hybrid platform providers do have a Professional Services group, but then we are talking very high hourly rates.

3) Hybrid frameworks aren’t easy to debug: Most Hybrid frameworks wrap around your HTML/JavaScript code, and this code runs inside the Hybrid container. When your app starts having performance or stability issues such as memory leaks or thread locking, debugging becomes a nightmare. Identifying whether it’s your app that’s causing the problem, or the Hybrid container (or the specific way you wrote your app to conform to the Hybrid container), will cost you a lot of time and money.

For years it seemed like Hybrid frameworks were the lesser of all evils.

And then came Swift and Android Studio.

With Swift, Apple made it incredibly simple to build stunning iOS apps. Swift’s syntax is very similar to JavaScript or Java; add the latest improvements to Xcode and the extensive online tutorials and you have a winning combination for iPhone development.

With Android Studio, Google finally delivered a fantastic development tool that makes it easy to build user interfaces and test them on many devices. Add the superb code-editing functions of IntelliJ IDEA and Android development is now also easy.

But what if my app is just a collection of HTML pages? First, you should know that section 2.12 of the App Store Review Guidelines states “Apps that are not very useful, unique, are simply web sites bundled as Apps, or do not provide any lasting entertainment value may be rejected”. Next, I would still suggest that building a simple app with one WebView and your web pages is a better solution than being tied to a Hybrid platform and its complexity and release cycles.

But what about cross-device development? One codebase that runs magically on all devices? Well, that depends on how extensive your app is. You see, even the best Hybrid frameworks have certain features that may or may not work depending on the platform. So unless you are building a VERY simple app, you are most likely designing and coding for both iOS and Android anyway.

That was only my experience; to see what others are doing, let’s take a look at the industry. As I looked at the “Showcase” sections of these Hybrid platform providers and searched the app store for reviews of those apps, it became clear that they are not well received by users.

Conclusion: as long as there are developers and multiple platforms, there will be developer tools and Hybrid platforms. But with the app-economy explosion, both Android and iOS are keeping their promise of providing superb tools for developers and are quickly surpassing the Hybrid platforms in today’s market. Will that always be the case? Will someone develop a Hybrid platform that is better than Native code? I will let you be the judge of that.

You Should Never Build Native! – by John O’Connor

John O’Connor is an Entrepreneur, Engineer, Educator, Serial CTO, Startup Junkie, and Tech Nerd with social polish. He currently serves as CTO at CardBlanc. In this issue of Tech Splash he shares his insight into the new React Native framework and why you should never build native:

Building your application for a platform purely in its native language and API has always been a risky endeavor.

It’s not hard to imagine the horrors of having the core of your entire business run on software that would only work in Windows 95 because it relied on some long-scrapped Windows API (I use this example because I’ve actually seen that happen – and they’re still running it 20 years later).

At its core, it is a form of vendor lock-in – one that many platform providers are all too happy to encourage. In the desktop world we’ve already made the move: from .NET’s Common Language Runtime (CLR) to Java’s JVM, savvy programmers understand that portability is important when it comes to ensuring a software product’s survivability.

Except, it seems, when it comes to mobile app development.

I’ve worked with countless CTOs and VPs who would argue, to the death, that building native for mobile is better than using a CLR or web-based technology. It seems the lessons of the past two decades have not been learned. Or perhaps they’re aiming for a job-security angle. Either way, it’s silly to think that the same trend of ‘agnosticizing’ won’t eventually make its move into mobile. The question is, how and when?

I’ve been a fan of using web technologies for building native apps for a long time. Web technologies were already designed to solve the problem of cross-platform compatibility, and as Web Components come ever closer to reality, the abstraction we need to build component-based native apps is fast becoming a standard for the web. I thought WebOS was a brilliant turn, and Palm (and later HP) missed the boat by scrapping this obviously useful paradigm.

Last week, building native mobile apps got a lot more interesting.

React, Facebook’s very fast open-source component-building system for JavaScript, just got a major upgrade with React Native. In addition to playing nicely with other front-end architectural systems (Backbone, Ember, Angular), developers can now use React to build native mobile applications.

Now before you grab your pitchforks and chant “Appcelerator – Xamarin – PhoneGap”, there’s a major difference that makes React Native not just another web-app wrapper. React Native is already being used – on probably the most widely proliferated app in existence: Facebook’s mobile app [1]. Gone are the days when Facebook eschewed HTML5 and regretted even using it [2].

And React itself is not just a web-based technology. It’s a component-building system that works with ANY imperative view technology (for example, UIKit or Android’s View SDK). Using a “Virtual DOM” and overloadable, side-effect-free rendering functions means that React is not ON the web, but merely OF the web. Writing components in an abstract way and allowing the underlying rendering technology to change is how we’ve built portability every time (from C++ compilers to Virtual Machines), and it seems like React has finally brought some semblance of this to the mobile world in a way that has already been proven at large scale.
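For flavor, here is a minimal sketch of what that looks like in React Native (the component and prop names are hypothetical): the same declarative, side-effect-free render, just backed by native views instead of the DOM.

```tsx
// A React Native component: <View> and <Text> map to native
// UIKit / Android views at runtime rather than to DOM nodes.
import React from "react";
import { Text, View } from "react-native";

export function Greeting({ name }: { name: string }) {
  return (
    <View>
      <Text>Hello, {name}!</Text>
    </View>
  );
}
```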
[1] https://code.facebook.com/posts/1014532261909640/react-native-bringing-modern-web-techniques-to-mobile/

[2] http://venturebeat.com/2012/09/11/facebooks-zuckerberg-the-biggest-mistake-weve-made-as-a-company-is-betting-on-html5-over-native/

What’s In Store For 2015?!

Happy New Year! 2014 was an incredible year for technology employment in LA. It is most certainly an employee’s market, characterized by a shortage of technical, UX and product talent locally and across the country. Most talented job prospects are receiving multiple offers at any given time, so networking and targeting passive candidates have never been more crucial factors in team building. In order to win out on hiring the top candidates, companies are offering increasingly competitive combinations of salary, bonus, equity, flex hours, telecommuting, benefits, growth opportunities, and, most importantly, the chance to work on meaningful projects with new technologies. Talent poaching and counter offers have risen in tandem with these trends. Additionally, we have seen a marked increase in demand for a variety of development languages and platforms, including Scala, Ruby, Python, Go and Node.js (which has become one of the most popular and widely used).

Our data shows that average base salary ranges (not including benefits, perks, bonuses, stock/equity or other factors that might influence total compensation) for the most common positions are:

  • Manual QA:  $60K – $90K (all levels)
  • QA Automation/SDET:  $110K – $125K (mid) $125K – $150K (Sr)
  • .Net Engineer: $90K – $110K (mid), $110K – $135K (Sr)
  • Java Engineer: $110K – $125K (mid), $125K – $145K (Sr)
  • DevOps/Linux Engineer: $110K – $125K (mid), $130K – $150K (Sr)
  • Node.JS Engineer: $100K – $120K (mid), $120K – $140K (Sr)
  • PHP Dev: $90K – $110K (mid), $110K – $135K (Sr)
  • FE/Web Dev: $100K – $120K (mid), $125K – $150K (Sr)
  • Product Manager: $90K – $120K (mid), $120K – $150K (Sr)
  • UI Designer: $80K – $110K (mid), $110K – $135K (Sr)
  • Project Manager: $90K – $110K (mid), $110K – $130K (Sr)
  • iOS Developer: $110K – $125K (mid), $125K – $165K (Sr)
  • Android Developer: $120K – $175K (all levels)

As we look forward to 2015, we anticipate that the skill-sets most in demand will include:

  • Node.js Developers
  • Data Scientists/Engineers
  • SDETs
  • DevOps Engineers
  • Cloud Infrastructure Engineers
  • Product Marketing
  • Android/iOS Developers

We also expect that demand for traditional Systems Administration (Windows and Linux), Network Engineering, manual QA and PHP will continue to fall. Particularly in QA and systems, positions that do not require scripting or heavy automation experience are increasingly rare.

We’re looking forward to a fantastic year, full of great projects, new technologies and fruitful collaboration.
Onwards & Upwards!

Going Down The Rabbit Hole? By Jon Dokulil

Jon Dokulil, VP of Engineering at Hudl, a company that makes software products for athletic teams and coaches, shares his experience of transitioning from MSMQ to RabbitMQ. If you’re interested in more of the nitty-gritty code details, there’s a jump link to a longer form article below.

 

Fun Fact: 2014 “Cyber Monday” sales were nearly 10% higher than those from 2013.

Going Down The Rabbit Hole? By Jon Dokulil, VP of Engineering at Hudl

Even though we use C# in our application servers, Hudl moved from MSMQ to RabbitMQ several years ago and we’ve been pleased with the result. RabbitMQ is stable, very fast, and provides many options for message routing and durability. Plus, the C# driver is solid and easy to work with. At Hudl we offload long-running operations and many of our interactions with third-party systems (Facebook, CRM, credit card processors, etc.) to RabbitMQ queues.

Some lessons learned using RabbitMQ (a small sketch follows the list):

  • Everything we queue needs to be durable (though RabbitMQ does provide options if you don’t require durability). In addition to durable queues, setting Delivery-Mode=2 means calls to enqueue a message don’t return in our producers until RabbitMQ has written the message to disk. This ensures a crash of our RabbitMQ node will never result in data loss.
  • Explicitly acknowledge messages only after processing completes to ensure at-least-once delivery of messages. That way, if a consumer crashes during message processing, RabbitMQ will deliver that message to another consumer.
  • If you are running in the cloud, ensure the RabbitMQ queue files reside on durable storage. We use AWS and store the queue files on PIOPs EBS volumes. PIOPs are critical if you expect high message volume. GP2 volumes could work if your message workloads come in sporadic bursts.
  • Tweak your Prefetch count to tune performance. Prefetch tells the consumers how many messages to grab at a time. Too low and you’ll slow down waiting for network requests. A good starting point is 20-50.
  • RabbitMQ uses persistent TCP connections. It’s best to have a single connection per server. The RabbitMQ client takes care of multiplexing, so one TCP connection can consume and produce messages across any number of queues.
  • We’ve invested a lot of time into our RabbitMQ code base. If we were starting today we’d use EasyNetQ, a wrapper around the RabbitMQ client that greatly simplifies the most common usage patterns.
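
A minimal sketch of those patterns (durable queue, persistent messages, explicit acks, prefetch), written here with amqplib for Node.js rather than the C# client Hudl uses; the queue name and URL are hypothetical:

```typescript
import amqp from "amqplib";

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();

  // Durable queue: survives a broker restart.
  await ch.assertQueue("jobs", { durable: true });

  // persistent: true is the amqplib equivalent of Delivery-Mode=2.
  ch.sendToQueue("jobs", Buffer.from(JSON.stringify({ task: "resize-video" })), {
    persistent: true,
  });

  // Prefetch: how many unacknowledged messages a consumer may hold at once.
  await ch.prefetch(20);

  // Ack only after processing completes, so a crash mid-work triggers redelivery.
  await ch.consume("jobs", async (msg) => {
    if (!msg) return;
    try {
      // ... do the work ...
      ch.ack(msg);
    } catch {
      ch.nack(msg, false, true); // requeue on failure
    }
  });
}

main().catch(console.error);
```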

RabbitMQ and C# work well together. We’ve been using it now for three years in production and are still happy with the choice. It’s easy to get up and running with just the basic functionality, but it’s also flexible if/when you want to use more complex routing scenarios. RabbitMQ is also very fast; we regularly push over 1,500 messages/sec and Rabbit barely breaks a sweat. If you want to move some logic off of your web servers, it’s a great choice for doing so in a durable and performant way.

For more details and code samples, check out the full article here: http://public.hudl.com/bits/archives/2013/11/11/c-rabbitmq-happy-servers/

 

Being “Cloud Agnostic” Could be Costly by Kevin Epstein

Kevin Epstein is a cloud computing expert, currently working as the Cloud Computing Manager at CorpInfo. He shares insight into the costs associated with being “cloud agnostic”, including the technological complexities that are involved.

Fun Fact: Over $150 billion is expected to be spent on cloud computing in 2014.

Being “Cloud Agnostic” Could be Costly, by Kevin Epstein

“We’d like to be cloud agnostic.” That’s a phrase I’m starting to hear more and more often. At face value, it sounds like it would give you the freedom to move between cloud platforms with little or no trouble. Being cloud agnostic may in fact be a good strategy, but that depends heavily on your use case. Too often this strategy means you will leave too much on the table.

All IaaS providers offer virtual machines as a basic service, but those providers also offer other PaaS and SaaS services that complement their basic VM offering. One example is AWS’s Elastic Load Balancer (ELB), which is a fully managed, scalable, and highly available load balancing service. Other cloud providers have offerings similar in function but different in implementation and capability. The difference in implementation and capabilities from one provider to another is what we often refer to as “vendor lock-in.”

Let’s continue with the load balancer example. If you’re in the AWS cloud, the service offered is ELB. If you choose not to use ELB, you need to “roll your own” by spinning up EC2 instances, configuring the base OS, and configuring your load balancer software of choice.

[Figure: how ELB is typically depicted in architecture diagrams (left) vs. the per-availability-zone load balancer instances AWS actually manages (right)]

The diagram on the left is a typical representation of how ELB is depicted in architectural diagrams. We see the load balancer and the multiple instances (in this case, web servers) associated with the load balancer. The diagram is a little deceiving, because what’s happening is that AWS maintains an EC2 instance running their own “secret sauce” load balancing software in each availability zone (AZ) in which you have worker instances, as shown in the diagram on the right. The diagram also starts to reveal some of the inner complexity that AWS is managing.

To replicate the ELB implementation in a cloud agnostic manner, you must deploy your own load balancing solution on EC2 instances. If you later choose to move to a different cloud provider, you simply redeploy your load balancing software onto VMs in the new provider’s cloud. In this case, let’s assume HAProxy is the load balancing software of choice. Architecturally the deployment is very similar to ELB, but all the complexity of managing the relationship between the HAProxy instances and the web server instances now falls to you. In other words, it is up to you to manage scaling in and out as your load changes by reconfiguring all the HAProxy instances. If the load becomes so great that you need additional HAProxy instances, you must manage DNS updates to route traffic between all of the HAProxy instances. This complexity has an associated cost above the cost of resources consumed.

[Figure: a self-managed HAProxy deployment replicating ELB across availability zones]

The cost of a single ELB is $18.30 per month (as of the date of this article). This does not include traffic handled by the ELB, because there are data transfer costs regardless of whether you use ELB or your own solution. In the HAProxy solution shown in the above diagram, three m1.small instances were deployed. Currently the cost of a single on-demand m1.small instance is $32.21 per month, so the cost to replicate the ELB service is $96.63 per month. That’s roughly five times more expensive than deploying ELB, excluding the costs associated with developing automation and orchestration to dynamically update HAProxy as your environment changes.
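To make the arithmetic explicit (prices as quoted above; this is just the back-of-the-envelope math, not a full TCO model):

```typescript
// Monthly comparison, excluding data transfer and the cost of building
// the automation/orchestration around the self-managed option.
const elbMonthly = 18.30;        // one managed ELB
const m1SmallMonthly = 32.21;    // one on-demand m1.small
const haproxyInstances = 3;      // one per availability zone in the diagram

const selfManagedMonthly = haproxyInstances * m1SmallMonthly; // 96.63
console.log(`self-managed: $${selfManagedMonthly.toFixed(2)}/month`);
console.log(`ratio vs ELB: ${(selfManagedMonthly / elbMonthly).toFixed(1)}x`); // ~5.3x
```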

Similar comparisons could be drawn for other managed offerings, such as implementing DynamoDB compared with deploying and managing MongoDB, or choosing Simple Queue Service (SQS) over deploying and managing RabbitMQ.

Another aspect to consider is that your data has to live somewhere. If you’re going to be truly cloud agnostic, then you’ll have to store your data on your provider’s block storage offering. As before, this will preclude you from being able to leverage SaaS offerings from your cloud provider, and when it comes to moving to another cloud, you still have the task of data migration, which at scale will pose challenges of its own.

As mentioned at the beginning of this article, there are scenarios where choosing not to use the services offered by your cloud provider makes sense. For example, if you’re a software vendor that has no control over the environment or platform your customers will deploy your software on, being cloud agnostic is the way to go. Otherwise, as illustrated in this article, being cloud agnostic is almost certainly a more costly and complicated endeavor.

In conclusion, being cloud agnostic could be extremely costly. I recommend you do your due diligence to evaluate whether the benefits of taking the cloud-agnostic approach outweigh the costs. My personal experience has been that embracing all the services your cloud provider offers is a far better alternative.

Kevin Epstein can be reached at www.linkedin.com/in/kevinepstein

“When Exactly IS The Right Time To Innovate” by Josh Hatter

Josh Hatter is a technologist and former broadcast and digital operations executive. His experience includes deploying high-performance technology infrastructure, designing asset management tools and defining workflows used in the production of online, broadcast and cinematic content at media companies like TMZ and Revolt TV. He has overseen Engineering, IT and Systems Administration teams supporting an array of business verticals. Josh currently provides consulting services throughout the country in addition to advising and mentoring startups.
Fun Fact: A single Google query uses 1,000 computers to retrieve an answer in 0.2 seconds.

When Exactly IS The Right Time To Innovate – by Josh Hatter

Designing and building out greenfield technical operations facilities can be a daunting task. Creating a technology budget, fighting to retain that budget, and using the inevitable value-engineering process to trim down to the absolute essentials is what I usually wind up going through. Something else to consider is the increasingly fast pace at which technology evolves, which can leave the original design obsolete somewhere between week one and week thirty-six of the job. That evolution can also be beneficial, though, by forcing me to innovate in areas I might not otherwise consider.

I have a few criteria I evaluate when considering a non-traditional, bleeding edge or innovative solution to a problem:

  • Does the solution save money? I mean real money, not savings that come from cutting corners or from an unstable environment that results in more downtime or a lot of overtime support hours.
  • Does the solution save significant time? Does it save enough time to be worth the risk of being an early (or only) adopter, or maybe of using consumer products in an enterprise environment?
  • Does the innovative solution solve a problem unique to my company?
  • Does the solution increase operational flexibility?  If there’s one thing that drives me nuts, it’s throwing money at a product that can only do one very specific function and cannot be used in any other way.  There’s nothing worse than having a storage room full of once-expensive hardware gathering dust.

When building out Revolt TV, there were a variety of challenges that most startups have. One of the bigger and potentially costly problems that needed to be solved was how to interconnect two buildings full of employees and core services located half a block from each other. Early in the design process, I spoke to multiple networking vendors and ISPs about the challenge of connecting two buildings with very high data bandwidth capacity, a dozen baseband video circuits, two dozen audio channels, VoIP, internet services and support for broadcast communication hardware. The bids I received were staggering, some running almost half a million dollars annually to accomplish what I was looking for.

I kept looking. Every vendor I spoke with got to hear about my challenge of connecting the buildings. Some really smart people made some suggestions. We could use line-of-sight microwave, but we would be limited to a 1-gigabit pipe per pair of dishes. We could put all production staff in one building for high-bandwidth connectivity to core services, and use MPLS or VPN connections for the rest of the staff to access business resources, with internet connectivity at each location. We could cut back on our operational functionality and requirements. I didn’t hear any viable solution I could take back to the executive team, so I kept networking and seeing what other people were doing.

One day I had lunch with two serious heads: one of the many brilliant engineers I have met over the years, and the storage and networking vendor I was using on the project. The engineer suggested we look at CWDM technology. CWDM, or Coarse Wavelength-Division Multiplexing, creates dedicated wavelengths (colors of the light spectrum) to be used for specific services over fiber-optic cable. As my networking vendor and I dug in further, we realized that this was a very economical solution requiring less than ten thousand dollars’ worth of hardware in each building. A dedicated wavelength per service allowed us to accomplish every single requirement listed above, with room for expansion! The last hurdle was to get a dedicated point-to-point, dark-fiber circuit between the two buildings. This was accomplished via a one-time commissioning fee for splicing fiber across and up the street and running the circuit into our spaces in each building.

This innovative solution worked right out of the box. The cost of hardware and commissioning was less than $50k. In fact, it worked so well that we did the same thing down to 1 Wilshire, where all of Hollywood’s fiber and networking services terminate. This install delivered our secondary video signal to our uplink facility at a fraction of the recurring monthly cost of leasing video fiber from traditional carriers. It also meant we had no “last mile” costs and could potentially link directly to any vendor located at 1 Wilshire with a patch cable.

When done right, innovation can decrease OpEx, increase productivity, provide maximum flexibility and growth potential, and make you look like a rock star.  Just make sure that a novel approach to a challenge is being done for good reason, or your team might be spending some long days and nights supporting flaky systems in the name of innovation.

Josh Hatter
www.linkedin.com/pub/josh-hatter/0/177/995

“The Cost of Interruptions” by Eric Wilson

Fun Fact: According to recent studies, interrupting your work to check your email can waste as much as 16 minutes.

On many occasions I have found myself having to explain to those outside of the software engineering world why unplanned interruptions are so, well, disruptive. I have tried to describe being in the zone: so completely deep in the understanding and comprehension of a task that a phone call, a question, or just the need to say ‘hello’ to an engineer in the zone is like pulling out the wrong block during an intense game of Jenga – everything falls down.

To be crystal clear – it is an extremely fragile period of enlightenment.

Much to my delight, Chris Parnin (@chrisparnin) over at ninlabs research did a nice writeup of the effects of interruptions on productivity and focus, accompanied by the requisite scientific rigor. From his post: based on an analysis of 10,000 programming sessions recorded from 86 programmers using Eclipse and Visual Studio, and a survey of 414 programmers (Parnin:10), we found:

  • A programmer takes between 10 and 15 minutes to start editing code after resuming work from an interruption.
  • When interrupted during an edit of a method, only 10% of the time did a programmer resume work in less than a minute.
  • A programmer is likely to get just one uninterrupted 2-hour session in a day.

Brutal. When is the worst time to interrupt an engineer? Research shows that the worst time to interrupt anyone is when they have the highest memory load. Using neural correlates for memory load, such as pupillometry, studies have shown that interruptions during peak load cause the biggest disruption.

I call it ‘being in the zone’ – Chris calls it ‘highest memory load’.

This real cost in lost productivity is a notion I’ve been describing for many years. I’m glad that it has now, at least somewhat, been quantified.

Fascinating stuff and a great read. I highly recommend it to those who find engineers to be the grumpy sort. It may just change your opinion.

Eric Wilson,
VP/Head of Product and Technology @ ScoreBig
http://ericwilson.erics.ws/
@ericwilsonsaid

“Salesforce Pivot, from SaaS to PaaS” by David Glettner

Since its inception, Salesforce.com has become a widely used and powerful tool. We asked David Glettner to share his insight into how the platform fits in with other CRM and sales tools as well as provide a perspective into his experience of managing a large scale implementation.

Salesforce Pivot, from SaaS to PaaS, by David Glettner

Salesforce.com is not just for sales-team automation anymore. In 15 years it has gone from a simple contact-tracking tool to a full-featured platform that entire businesses operate on. With planning and open communication, this platform has the power and functionality to fuel and empower an organization’s growth.

It started out as a way to track leads and accounts of people who were interested in purchasing goods and services. Today there is almost nothing that can’t be done on this PaaS offering: it ranges from a pure relational database to a full sales and service platform, and an extremely active development community lets you do even more. Everything from invoice and payment collection to full customer and partner portals has been built by third-party developers and is easily deployed. Within the past few years, organizations have been flocking to the platform to do much more, not only because of its flexibility, but also because of its scalability, extensibility and ease of deployment.

A successful implementation of the Salesforce.com technology relies heavily on a strong understanding of the business goals along with strong collaboration between stakeholders, and should include the following key strategies:

  1. Clearly identify business goals and stakeholders
  2. Document existing systems and processes
  3. Clearly communicate the solution that is to be implemented, including a mapping of existing processes to new processes where they deviate
  4. Train users on the system from the vantage point of each main user type

While the dream is to have a single comprehensive system that houses all information, we find that a myriad of different specialized systems are brought together to accomplish business goals. Unfortunately, the result usually looks more like alphabet soup: a combination of multiple systems including CRM, ERP, APIs, etc. Over time, the overhead of these different systems leads to an army of specialized consultants or a lot of key employees with indispensable institutional knowledge who are difficult to scale or replace.

So in this day and age, when we are making great advances, what does this mean for us? It means that, looking back at that dream of a single system, there are now many financial packages that allow tracking, sales, invoicing and payment collection to happen natively within Salesforce.com. It means that while the construction of a single system was once almost a non-starter (due largely to the challenges of assembling a team with such a broad range of functional and operational skills), the Salesforce.com platform, along with some of its native applications, makes the customization of a single system viable.

In a recent project I was engaged in, an organization was operating on a combination of Salesforce.com, Siebel and PeopleSoft, as part of application sprawl that included 17 custom-built applications for data transformation, integration and reporting. After reviewing and documenting these systems and their business functionality, we were able to define a solution that allowed for a phased migration of the systems to the Salesforce.com platform, thereby reducing the costly consulting, NetOps, DevOps, hardware and associated overhead expenses.

So as you begin to review your strategic planning initiatives and current budgeting needs, re-evaluate your core systems, consider how you can leverage a cloud platform like Salesforce.com to further empower business stakeholders and provide a smoother TechOps operation, and weigh your platform options.

David Glettner
Head of Enterprise Salesforce Initiatives @ Internet Brands
www.linkedin.com/in/dglettner
