Cisco’s Umbrella – Another Effective Layer of Security

Up and running for only 20 days, Cisco Umbrella has protected us from 358 potential security issues. Diving deeper into the actual events shows that many of these are merely potentially dangerous sites, but better safe than sorry.

The most compelling aspect of this product is that it works in the cloud, before the data even gets to you. Most web filtering security solutions work at the perimeter, meaning the data reaches your firewall and is then blocked. Umbrella does this at the DNS layer in the cloud, ultimately cutting down on your own bandwidth usage while providing security. Two birds with one stone.
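The DNS-layer idea is simple enough to sketch. In this hypothetical resolver (the domains, blocklist and sinkhole address are invented for illustration, not Umbrella's actual feed or infrastructure), the block decision happens at name-resolution time, so the risky connection is never even attempted:

```python
# Minimal sketch of DNS-layer filtering: the resolver refuses to hand out the
# real address for a known-bad domain, so no traffic ever leaves your network.
BLOCKLIST = {"malware.example.com", "phish.example.net"}  # hypothetical feed

def resolve(domain: str, zone: dict) -> str:
    """Return an IP for `domain`, or a sinkhole address if it is blocked."""
    if domain in BLOCKLIST:
        return "198.51.100.1"  # placeholder sinkhole/block-page address
    return zone.get(domain, "0.0.0.0")

zone = {"example.com": "93.184.216.34"}
print(resolve("example.com", zone))          # resolves normally
print(resolve("malware.example.com", zone))  # sinkholed before any bytes flow
```

The same decision made at the firewall would happen only after the packets had already consumed your bandwidth; made at the resolver, it costs you nothing.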

While I wouldn’t suggest you go and throw out any of your other security solutions, Umbrella can be a great add on to your overall security strategy.

Feel free to contact me to discuss further.

Being All Things to All People

Having been in the Managed Services game for about 25 years, as you can imagine we have seen the industry change drastically. When End to End started 23 years ago there weren't too many Managed Service Providers out there. Today everyone is doing it, but what I have noticed is that many are trying to be all things to all people. In our infancy as an organization we were very opportunistic and took any business that came our way. As we matured we narrowed our focus so that we can be the best at what we do.

We are not a Server, Storage, Anti-Virus, Windows or Print Managed Service Provider. We are strictly a Voice, Data and Wireless Managed Service Provider. We focus on core infrastructure technologies and ensure our staff have the latest certifications to be the best in those areas.

End to End is a member of the Trust X Alliance, a group of trusted partners that have come together to deliver best-in-class solutions for our customers. Each member has its areas of expertise, allowing each of the members to focus on what they do best.

When I look at an organization's website and they claim to do everything from Web Development to Cabling, Security, Wireless, Data Centre, Cloud Services, Managed Services, Managed Security Services, Managed Print Services and so on, it makes me wonder how good they are at each of these technologies. Even within the Data Centre there are L2 and L3 Switching technologies, Hyper-V, VMware, Fibre Channel, Fibre Channel over Ethernet, Linux, Windows, Storage, Firewalls, Routing and more.

And even within common technologies there are multiple vendors. Can you be a Firewall expert in Cisco, Juniper, Fortinet, Palo Alto, Check Point, SonicWall and WatchGuard? I think not. You may be able to understand them all, but you can only be an expert in two or three…

Now I know that there are large organizations out there that can and do deliver all of these services. But do they really act like one company? Does the Data Centre guy interface with the Firewall guy within that organization, or are they in separate cities and have never met each other?

If you have a need for anything, you can call me up and if it is in my core, I'd be happy to deliver it. If it is not, I'd be happy to give a referral to one of our Trust X Alliance partners. And if I can deliver some, but not all, I will be completely upfront that we would be bringing in a trusted partner to help deliver the services I can't.

This allows End to End to deliver high quality Managed Services in Voice, Data and Wireless technologies, while leaving the other technologies to companies that focus on what they do best. We win, our partner wins and most importantly our customers win.


Monitoring Wireless Capacity

In my last post, I talked about wireless network challenges, what to look for and how to plan properly for a deployment. I talked about planning for capacity to ensure you don’t go over a certain number of users per AP.

So, the next challenge becomes: how do I ensure that as I grow I don't begin to exceed the optimal number of users per AP?

This is where advanced network monitoring can help mitigate issues before they become a problem. In the past a network monitor would poll or ping an access point to ensure it was available on the network. Although this is helpful, it does nothing to monitor capacity.

Capacity planning is critical to any network management system. Bandwidth, CPU and memory need to be monitored on all your network devices, each configured with a baseline that will alert you when it is exceeded.

Recently we added some new capabilities to our Network Management System to cover Wireless Capacity Monitoring. Our monitors allow me to set the associated-users threshold to a number of my choosing, either per AP, per Controller or any combination thereof. If the threshold is reached I can send an email, log the event, open a ticket in our system, call a web service on another system, run a SPROC, or do any combination of the above.
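The logic behind these monitors can be sketched in a few lines (the AP names, counts and actions below are invented for illustration; the real NMS does considerably more):

```python
# Per-AP capacity check: flag any AP whose associated-client count
# exceeds the configured threshold, then fire the configured actions.
def over_threshold(ap_clients: dict, threshold: int) -> list:
    """Return the APs whose associated-user count exceeds the threshold."""
    return sorted(ap for ap, count in ap_clients.items() if count > threshold)

def actions(aps: list) -> list:
    # Stand-ins for the real responses: email, log entry, ticket, web-service call.
    return [f"open ticket: {ap} over capacity" for ap in aps]

snapshot = {"AP-Boardroom": 19, "AP-Lunchroom": 7, "AP-Warehouse": 12}
print(actions(over_threshold(snapshot, threshold=16)))  # only AP-Boardroom trips
```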

For our customers this will ensure a positive wireless experience. For us, it will help cut down on calls to our NOC regarding wireless performance issues, because we will be dealing with them before they become an issue.

This kind of monitoring is critical in less static environments like boardrooms, public areas with guest access and retail environments. In static environments where you know the number of users it may be less critical, but users move around, daily patterns change, and over time you hire more staff; these changes can overload one AP, affecting the user experience and possibly productivity.

Setting all of this monitoring up may be time consuming in the short term, but can save you hours and hours of troubleshooting in the future.

Wireless Networking Challenges

Not too many people are plugging their laptop into an Ethernet cable anymore. In fact, just about everyone in our office relies on wireless for their connectivity. In the past, wireless was too slow and somewhat unreliable, but it has come a long way and the convenience of not having to plug in far outweighs the performance impact, if any.

Coverage is obviously one of the key elements for a good wireless deployment. It needs to work in your office, in the boardroom, in the lunch room and maybe even at the picnic table just outside your building. Ideally it should work anywhere your phone, tablet or laptop goes.

What gets missed quite often is planning for capacity. Coverage ensures there is a signal, but each access point can only service so many clients before it becomes slow, unresponsive and ultimately useless. It is also important to understand the applications that will be used over the wireless to get an idea of how many users per AP is ideal.

Some vendors recommend 20-25 users per AP. This is probably a good number if they are web browsing and checking email; anything more and I would suggest you will run into problems. In some cases, where large files are being saved to servers on a regular basis, it is advisable to stick with Ethernet. Overall, however, I would suggest that you don't want any more than 10-16 users per AP.
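The capacity math itself is just a ceiling division. A back-of-the-envelope sketch (the user counts are illustrative only, and no substitute for a proper site survey):

```python
import math

def aps_needed(total_users: int, users_per_ap: int) -> int:
    """Minimum AP count to stay at or under a chosen users-per-AP limit."""
    return math.ceil(total_users / users_per_ap)

# 120 users at a conservative 12 per AP vs. a vendor's optimistic 25 per AP:
print(aps_needed(120, 12))  # 10 APs
print(aps_needed(120, 25))  # 5 APs
```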

Interfering APs may also have an impact on your deployment. In some cases I have seen an AP detect up to 59 neighboring APs. This can cause havoc with your deployment. Site surveys prior to your deployment can certainly help mitigate this, but remember that a site survey is done at a point in time. If there is a new office building going up next door, you can expect more interference in the near future. Site surveys are good for determining the most effective placement of your APs and some tools will help you plan based on capacity as well.

When APs were standalone, deployments were much more complex than they are today with Controller based APs. The controller centralizes the configurations and pushes them out to the APs. Since the controller has a holistic view of the entire network, it can instruct the APs to make channel adjustments without affecting neighboring APs. One of my favorite features in a Controller based deployment is the ability to detect rogue on-wire APs and even block any clients from joining them. A rogue on-wire access point is an AP that has been installed on the LAN via Ethernet, but is not part of the controller based system. When configured, the controller will send out disconnect messages to any clients that attempt to join the rogue AP.
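At its core, rogue on-wire detection is an inventory comparison, something like this sketch (the MAC addresses are made up, and a real controller also correlates over-the-air and on-wire observations):

```python
# Toy rogue on-wire check: any AP seen on the wired LAN that the controller
# does not manage gets flagged for containment.
managed_aps = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def find_rogues(aps_seen_on_wire: set) -> set:
    """APs on the wired network that are not part of the controller system."""
    return aps_seen_on_wire - managed_aps

on_wire = {"aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"}
print(find_rogues(on_wire))  # the unmanaged AP, a containment candidate
```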

My only complaint with a controller based deployment is that the cost is much higher than a standalone deployment. The Controller based AP is the same cost as a Standalone AP, but the controller hardware and licensing is extra.

The list of environmental challenges that can affect your wireless deployment is endless. Elevators, Microwaves, Cordless Phones, Water, Steel, Concrete, Small Rocks, you name it. They can all have an effect.

And of course security. One of the most important aspects of a good wireless deployment is ensuring only you and your staff can use it. A good deployment will have LDAP or RADIUS integration. If security is a top priority then you should consider coupling LDAP or RADIUS with a second factor, using key fobs or software that provides OTPs (one-time passwords).

The same AP’s that access your corporate network can also provide guest access. When providing guest access you can make it difficult so that only people you authorize can use it, or you can make it simple and provide a splash page where guest users are asked to provide an email address or simply agree to the terms of usage.



NGFW and UTM, What is the difference?

Over the last week or so I have been researching and trying to find the difference between NGFW (Next Generation Firewall) and UTM (Unified Threat Management). I came across some great blogs that helped me cut through the marketing hype.

In this blog the author makes some great points that essentially argue that there is no difference. As I read through the comments on the blog, it was not so clear, as many argued that there is a big difference.

When I looked up the definition of NGFW and UTM in Wikipedia to get a baseline as to where I would end up on this argument, it solidified in my mind that these are in fact the same thing.

Gartner states an NGFW should provide:

  • Non-disruptive in-line bump-in-the-wire configuration
  • Standard first-generation firewall capabilities, e.g., network-address translation (NAT), stateful protocol inspection (SPI) and virtual private networking (VPN), etc.
  • Integrated signature-based IPS engine
  • Application awareness, full stack visibility and granular control
  • Capability to incorporate information from outside the firewall, e.g., directory-based policy, blacklists, white lists, etc.
  • Upgrade path to include future information feeds and security threats
  • SSL decryption to enable identifying undesirable encrypted applications



Wikipedia, for its part, describes the UTM this way: UTMs represent all-in-one security appliances that carry a variety of security capabilities including firewall, VPN, gateway anti-virus, gateway anti-spam, intrusion prevention, content filtering, bandwidth management, application control and centralized reporting as basic features. The UTM has a customized OS holding all the security features in one place, which can lead to better integration and throughput than a collection of disparate devices.


Now there may be some subtle differences here, but for the most part the two provide the same set of features. It seems to me that the main argument for the difference between the two is that the NGFW is a more robust engine and that it won’t suffer the performance impacts that a UTM would.

This becomes even more confusing when we look at the Gartner Magic Quadrant for the two.

[Gartner Magic Quadrant charts for NGFW and UTM]

Palo Alto seems to be the only NGFW of any significance not to appear in the UTM category. And how is it that Fortinet is both a UTM and an NGFW, but is not as good at being an NGFW?

If there is in fact a difference between the two, then one product cannot be both, can it?

My conclusion therefore is that they are the same. Some may be better than others, but they are essentially equal in features.

Your Thoughts??



Data and Voice Convergence – oh ya, and now Video…

Before Voice and Data merged together there were two separate camps: the Telco guys and the Network guys. The two rarely talked, as they had nothing in common and as a result nothing to say to each other. The Telco guys were usually an outsourced company that came into your office, did their thing and then left. There was no need for ongoing support, and the only time you saw them was when you needed an expansion to the phone system or there was a problem that needed rectifying. The phone system of the past was made up of many different devices, usually from many different vendors. You had the PBX of course, the voice mail system, the paging system, the music on hold system (usually a radio hooked up to some black box), and possibly a security system interface. These were all bolted to the wall on a big sheet of plywood, usually in a room that only maintenance had access to.

Today the phone system resides with all the servers in the computer room and is indistinguishable from any other server. In fact, it may even be a VM on a server that is also hosting more traditional data apps. When data and voice started to merge, a battle ensued between companies that provided Telco services and companies that provided Network services, each trying to solidify their place in the new converged market. From my perspective the Networking companies had an easier time adopting the new technology, as Voice ended up being just another application on the Network. The Telco guys had to figure out the Networking part, which is the entire infrastructure of Routing, Security, Wireless, VPNs, MPLS, QoS, VLANs, DHCP, TFTP and so on. A much more daunting task to say the least. On the other hand, Networking guys had to figure out call flows (which are not entirely different from IP Routing), Scripting (for Auto Attendants), and some other technologies (PRI, DTMF, etc.) that are not really a stretch for a Networking professional. Either way, some from both sides of the equation were successful and some were not.

I am starting to see this again, but this time with Video. Video Conferencing was always its own thing. And this thing required an expert, to do some expert things, to make your video conferencing work. Now, I do not mean to take away from professionals that set up big boardrooms for optimal coverage of video and the best possible placement of microphones and speakers to ensure the highest quality of audio, as that does require a certain level of expertise. The video conferencing equipment of the past was standalone hardware that did not integrate with your existing phone system and usually used its own private lines or network to ensure quality video streams. But today, this video equipment is just another endpoint on your IP Telephony system. It has a phone number and extension just like any other phone on the system. It can be tied to a user with Voice Mail just like any other phone. What level of expertise is required to ensure this device works correctly? Nothing new really, just that Networking professional with a deep understanding of VLANs, QoS, Routing and so on. So, as all of these Video experts come out of the woodwork, ready to provide you with the underlying network infrastructure and a phone system, just make sure they hired some Network and Voice professionals along the way.

A brief history of Firewalls and the current state of affairs

Back in about 1996, when the Internet was still young and many of us were scrambling to figure out what it was all about and how it worked, firewalls were big and expensive. Many customers didn't see the need to have one and would say things like "why would anyone want to attack my company?". Security is always a tough sell, as you are not selling something that will help grow a business or improve processes; you are selling peace of mind. In those days there weren't that many security vendors to choose from. The big names were Check Point, Cisco and Novell. Oh, that's right, I almost forgot: does anybody remember Shiva…

Check Point was clearly the front runner with their Firewall-1 product, but was also the most complex and expensive. Novell's BorderWare was popular due to the large install base and popularity of NetWare, and Cisco was the trailer with the Cisco PIX. I think the first model was the PIX 10000: a 4U appliance with two 10 Meg interfaces, complete with a floppy drive for upgrades.

We stayed away from BorderWare, as we were already down the TCP/IP path and had distanced ourselves from Novell's NetWare previously. Our first Check Point deployment was a nightmare. Running on top of Windows NT, it required driver upgrades and registry changes, and took forever to get working. I always felt like it was hanging from a shoestring and could blow up at any minute. The PIX on the other hand was almost too easy. Power it up, connect a console and enter 5 commands to get it working. Seriously, 5 commands:

ip address inside x.x.x.x
ip address outside x.x.x.x
global (outside) 1 x.x.x.x
nat (inside) 1 0 0
route outside 0 0 x.x.x.x

This was enough to secure the inside and let users out to the Internet.

Other than our first nightmare, there were many other reasons we did not go with Check Point. Their licensing was very confusing, their pricing was very high, and at that time they were software only and did not have an appliance solution. Later Check Point partnered with Nokia to deliver an appliance, but that was even more of a licensing nightmare. Additionally, you had to manage the routing and interfaces via the Nokia engine, with Check Point as a bolt-on. So, we happily sold the PIX and became experts in the field of firewalling and NAT.

Then came IPSec… Cisco were slow to respond and their first implementation (ver 5.0) in the PIX worked, but was not very secure. The tunnels terminated on the outside interface and you needed to create Conduits (the PIX's term for ACLs) into the internal network. The problem was that these Conduits referenced the LAN IPs at both the remote and local networks. As a test, I connected to our upstream router, created a loopback address that matched the remote LAN and then telnetted from the loopback through the PIX into the local network.

Nortel came out with the Contivity Appliance for IPSec tunnels for both Site to Site and remote access. Clearly they were a market leader in this area. Cisco acquired Altiga and came out with the VPN 3000 Series Concentrator. Interestingly, the interface on the VPN3000 was very similar to the Nortel. Possibly they were both creations from the same technology. I can tell you that we still have a Contivity and VPN 3000 running in our network and they serve a purpose.

Cisco had also built firewalling and IPSec capabilities into their routers. Cisco's firewall implementation was called CBAC (Context Based Access Control) and was relatively easy to configure and manage. I should point out that the Access Control List (ACL), the basis for any firewall configuration, had been around long before firewalls. An ACL on its own can block traffic, but it cannot dynamically allow traffic in the return direction; you would need a static entry that permanently permits that return traffic. The firewalling component built on top of the ACL tracks the state of each connection and dynamically creates ACL entries for return traffic.
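A toy sketch of what that stateful component adds (this illustrates the concept only, not CBAC's actual implementation; the addresses and ports are invented):

```python
# A static ACL either permits a flow or it doesn't. The stateful layer instead
# records each outbound connection and permits only the matching return traffic.
state_table = set()  # established connections, as simplified 4-tuples

def outbound(src, sport, dst, dport):
    """Record an outbound connection; note the reply we now expect to see."""
    state_table.add((dst, dport, src, sport))

def inbound_allowed(src, sport, dst, dport):
    """Return traffic is allowed only if it matches a tracked connection."""
    return (src, sport, dst, dport) in state_table

outbound("10.0.0.5", 40000, "203.0.113.9", 80)
print(inbound_allowed("203.0.113.9", 80, "10.0.0.5", 40000))   # True: tracked reply
print(inbound_allowed("198.51.100.7", 80, "10.0.0.5", 40000))  # False: unsolicited
```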

Many other vendors started to show up in the market and although I am not clear on the timing, Netscreen appeared in 1997 and quickly became a market leader in firewall and VPN technologies, so much so that Juniper bought them for $4 billion in 2004. We quickly jumped onto the Netscreen bandwagon as Cisco had started to fall behind in some key areas for firewalls and VPNs. Netscreens were easy to configure, easy to manage, had both a CLI and a GUI, and were more cost effective than the equivalent Cisco appliances. Other vendors we started to see in the late 90's were WatchGuard and SonicWall, and although we ran into them from time to time they were of little threat as they did not provide the features and functionality of the bigger players.

It is fascinating to me how small the industry really is, as these founders and leaders jump around from company to company reinventing the same product. For example:

One of the three Netscreen founders left after only three years and founded Fortinet. Netscreen acquired OneSecure, whose founder was a Check Point engineer and later went on to create Palo Alto. This type of activity is common, and it is no wonder there are so many competitors out there today.

For the first few years after Juniper acquired Netscreen, it was business as usual and we were enjoying designing and installing quality networks, of which Juniper/Netscreen were a big part. Juniper then made what I would consider a huge mistake. They decided that "One OS" built on the JUNOS platform was more important than enhancing the capabilities of the existing ScreenOS products. Certification requirements quickly changed and our organization was expected to drop everything and get all our techs up to speed on JUNOS. Before jumping in with both feet I had Juniper send me a couple of the new SRX platforms running JUNOS for testing. The SRX was supposed to be the replacement for the Netscreen firewalls, but it was not ready for prime time. There were features "not available yet", we ran into a bunch of bugs, and for some of the more basic tasks we had to run scripts within the box. Juniper's plan to End of Life the Netscreen products did not go well, and even today many of the ScreenOS products are still available for sale.

Check Point, although still around today, had completely fallen off the radar. We rarely ran into them, and when we did, displacing them was not difficult due to their complexity and pricing.

Cisco had once again started to catch up with both their Router based Firewalls and their ASA firewalls. Between these two products and the fact that Juniper was still selling the Netscreen products we had good solid solutions through the 2000’s.

UTM (Unified Threat Management) and NGFW (Next Generation Firewall) represent the next phase in the evolution of firewalls. Integrating URL Filtering, Application Control, IPS, AntiX and in some cases DLP into one appliance is the new way to go. This is where we now see the likes of Fortinet, Palo Alto and SonicWall all making headway. Cisco have once again fallen behind in this technology and are scrambling to catch up (more on this below). Dell's acquisition of SonicWall has helped them considerably, both from a marketing standpoint and probably by pumping a lot of money into R&D.

Fortinet is a solid product that works well with all of these services enabled. To date our experience with Fortinet technical support and the RMA process has been positive. We have had some experience with Palo Alto and SonicWall and they are also good units. My problem with Palo Alto is that I can't get them to call me back after contacting them to talk about a partnership. Not a good start to the relationship, and it puts a bad taste in my mouth as to the level of support we would be getting. My issue with SonicWall is that because they are under the Dell brand there are really no margins to be had. I know they are good firewalls, but are they better than Fortinet and Palo Alto? Not really.

All of these products do what they say; some have features that others don't, but overall they are all very similar. So, in the end, what it comes down to now is our ability to manage and maintain a network effectively. Even though Cisco lacks features that others may have, we know what to expect from support and product replacement. As I said, Fortinet are also good in this respect. Juniper have always had good support, but the SRX fiasco leaves them far behind.

Back to Cisco and their NGFW. Unfortunately, Cisco have done the bolt-on method again. I love their products, as they are well built and their support is still better than their competitors'. However, managing and deploying a Cisco ASA with NGFW requires two management interfaces: one for the Firewall and one for the NGFW Services. This is not something a Network Manager wants to deal with.

Through all my experience with these products one thing is still true: no one box from any vendor does it all perfectly. We still require deployments that would be near impossible without a Cisco Router. Many times the Cisco Router sits in parallel with a Juniper or Fortinet or even the Cisco ASA.

No magic bullet – sorry…..