Cisco’s Umbrella – Another Effective Layer of Security

Up and running for only 20 days, Cisco Umbrella has protected us from 358 potential security issues. Diving deeper into the actual events shows that many of them are only potentially dangerous sites, but better safe than sorry.

The most compelling aspect of this product is that it works in the cloud, before the data even gets to you. Most web filtering security solutions work at the perimeter, meaning the data reaches your firewall and is then blocked. Umbrella does this at the DNS layer in the cloud, ultimately cutting down on your own bandwidth usage while providing security. Two birds with one stone.
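Conceptually, DNS-layer filtering works like this: the resolver checks each lookup against a threat list and answers with the address of a block page instead of the real record, so the bad traffic never reaches your edge. A minimal sketch of the idea; the domains, addresses and lists below are made up for illustration, not Umbrella's actual data:

```python
# Sketch of DNS-layer filtering: a known-bad domain resolves to a block-page
# IP instead of its real address, stopping the request in the cloud before
# it consumes any of your own bandwidth. All values are illustrative.

BLOCKLIST = {"malware.example", "phishing.example"}  # hypothetical threat feed
BLOCK_PAGE_IP = "203.0.113.10"                       # hypothetical block-page address

REAL_RECORDS = {"example.com": "93.184.216.34"}      # stand-in for real DNS data

def resolve(domain: str) -> str:
    """Answer a DNS query, substituting the block page for bad domains."""
    if domain in BLOCKLIST:
        return BLOCK_PAGE_IP  # the request never reaches the dangerous site
    return REAL_RECORDS.get(domain, "0.0.0.0")

print(resolve("malware.example"))  # -> 203.0.113.10 (blocked in the cloud)
print(resolve("example.com"))      # -> 93.184.216.34 (allowed through)
```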

While I wouldn’t suggest you go and throw out any of your other security solutions, Umbrella can be a great add on to your overall security strategy.

Feel free to contact me to discuss further.

Monitoring Wireless Capacity

In my last post, I talked about wireless network challenges, what to look for and how to plan properly for a deployment. I talked about planning for capacity to ensure you don’t go over a certain number of users per AP.

So, the next challenge becomes: how do I ensure that as I grow I don't begin to exceed the optimal number of users per AP?

This is where advanced network monitoring can help catch issues before they become problems. In the past, a network monitor would poll or ping an access point to ensure it was available on the network. Although this is helpful, it does nothing to monitor capacity.

Capacity planning is critical to any network management system. Bandwidth, CPU and memory need to be monitored on all your network devices, each configured with a baseline that will alert you when it is exceeded.

Recently we added some new capabilities to our Network Management System to cover wireless capacity monitoring. Our monitors allow me to set the associated-user threshold to a number of my choosing, either per AP, per controller or any combination thereof. If the threshold is reached I can send an email, log the event, open a ticket in our system, call a web service on another system, run a SPROC, or do any combination of the above.
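The threshold logic itself is simple to picture. Here is a small sketch of that kind of per-AP check; the AP names, threshold value and actions are illustrative, not our actual NMS configuration:

```python
# Sketch of wireless capacity alerting: compare each AP's associated-client
# count against a configurable threshold and fire one or more actions for
# any AP that exceeds it. Names and numbers here are illustrative only.

def check_capacity(client_counts, threshold, actions):
    """Return the APs over threshold, invoking each action for them."""
    alerts = []
    for ap, count in client_counts.items():
        if count > threshold:
            alerts.append(ap)
            for action in actions:  # e.g. send email, log, open a ticket
                action(ap, count)
    return alerts

log = []
def log_action(ap, count):
    log.append(f"{ap} has {count} associated clients (over threshold)")

counts = {"AP-Boardroom": 22, "AP-Lunchroom": 9, "AP-Lobby": 17}
over = check_capacity(counts, threshold=16, actions=[log_action])
print(over)  # -> ['AP-Boardroom', 'AP-Lobby']
```

A real deployment would pull the client counts from the controller (via SNMP or its API) on a schedule rather than from a hard-coded dictionary.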

For our customers this will ensure a positive wireless experience. For us, it will help cut down on calls to our NOC about wireless performance, because we will be dealing with problems before users notice them.

This kind of monitoring is critical in less static environments like boardrooms, public areas with guest access and retail environments. In static environments where you know the number of users it may be less critical, but as users move around, change their daily patterns, or you hire more staff over time, these changes can overload one AP, affecting the user experience and possibly productivity.

Setting all of this monitoring up may be time consuming in the short term, but can save you hours and hours of troubleshooting in the future.

Wireless Networking Challenges

Not too many people are plugging their laptops into an Ethernet cable anymore. In fact, just about everyone in our office relies on wireless for connectivity. In the past, wireless was too slow and somewhat unreliable, but it has come a long way, and the convenience of not having to plug in far outweighs the performance impact, if any.

Coverage is obviously one of the key elements for a good wireless deployment. It needs to work in your office, in the boardroom, in the lunch room and maybe even at the picnic table just outside your building. Ideally it should work anywhere your phone, tablet or laptop goes.

What gets missed quite often is planning for capacity. Coverage ensures there is a signal, but each access point can only service so many clients before it becomes slow, unresponsive and ultimately useless. It is also important to understand the applications that will be used over the wireless to get an idea of how many users per AP is ideal.

Some vendors recommend 20-25 users per AP. This is probably a good number if they are web browsing and checking email; anything more and I would suggest you will run into problems. In some cases, where large files are saved to servers on a regular basis, it is advisable to stick with Ethernet. Overall, however, I would suggest you don't want any more than 10-16 users per AP.
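One rough way to sanity-check a users-per-AP number is to divide an AP's realistic usable throughput by the bandwidth each user's applications need. The figures below are illustrative assumptions for the sketch, not vendor specifications:

```python
# Rough capacity check: realistic AP throughput divided by per-user demand
# gives an upper bound on users per AP. All numbers are illustrative.

def max_users_per_ap(ap_throughput_mbps: float, per_user_mbps: float) -> int:
    """Upper bound on the users an AP can serve at a given per-user demand."""
    return int(ap_throughput_mbps // per_user_mbps)

# Assume an AP with roughly 50 Mbps of usable throughput:
print(max_users_per_ap(50, 2))  # light web/email (~2 Mbps each) -> 25
print(max_users_per_ap(50, 5))  # heavier applications (~5 Mbps each) -> 10
```

The two answers line up with the ranges above: the 20-25 figure only holds for light traffic, while heavier applications push you down toward 10.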

Interfering APs may also have an impact on your deployment. In some cases I have seen an AP detect up to 59 neighboring APs. This can cause havoc with your deployment. Site surveys prior to your deployment can certainly help mitigate this, but remember that a site survey is done at a point in time. If there is a new office building going up next door, you can expect more interference in the near future. Site surveys are good for determining the most effective placement of your APs and some tools will help you plan based on capacity as well.

When APs were standalone, deployments were much more complex than they are today with controller-based APs. The controller centralizes the configurations and pushes them out to the APs. Since the controller has a holistic view of the entire network, it can instruct an AP to make channel adjustments without affecting its neighboring APs. One of my favorite features in a controller-based deployment is the ability to detect rogue on-wire APs and even block clients from joining them. A rogue on-wire access point is an AP that has been installed on the LAN via Ethernet but is not part of the controller-based system. When configured, the controller will send out disconnect messages to any clients that attempt to join the rogue AP.
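The channel-adjustment idea can be sketched in a few lines: give each AP a non-overlapping 2.4 GHz channel (1, 6 or 11) that none of its already-assigned neighbors is using. This is a deliberately simplified greedy model; real controllers also weigh signal strength, interference and airtime, and the AP names are made up:

```python
# Simplified sketch of controller-driven channel planning: assign each AP a
# non-overlapping 2.4 GHz channel (1, 6, 11) avoiding its neighbours' channels.
# Greedy and illustrative only; real controllers consider far more factors.

CHANNELS = [1, 6, 11]

def assign_channels(neighbors):
    """neighbors: dict mapping AP name -> list of neighbouring AP names."""
    assigned = {}
    for ap in neighbors:
        used = {assigned[n] for n in neighbors[ap] if n in assigned}
        free = [c for c in CHANNELS if c not in used]
        # If every channel is taken, a real system would pick the
        # least-contended one; the sketch just falls back to channel 1.
        assigned[ap] = free[0] if free else CHANNELS[0]
    return assigned

aps = {"AP1": ["AP2"], "AP2": ["AP1", "AP3"], "AP3": ["AP2"]}
print(assign_channels(aps))  # -> {'AP1': 1, 'AP2': 6, 'AP3': 1}
```

Note that AP1 and AP3 can safely share channel 1 because they don't hear each other; only AP2, which neighbors both, needs to move to 6.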

My only complaint with a controller-based deployment is that the cost is much higher than a standalone deployment. A controller-based AP costs the same as a standalone AP, but the controller hardware and licensing are extra.

The list of environmental challenges that can affect your wireless deployment is endless. Elevators, microwaves, cordless phones, water, steel, concrete, small rocks, you name it. They can all have an effect.

And of course, security. One of the most important aspects of a good wireless deployment is ensuring only you and your staff can use it. A good deployment will have LDAP or RADIUS integration. If security is a top priority, consider coupling LDAP or RADIUS with a second factor, using key fobs or software that provides one-time passwords (OTP).

The same APs that access your corporate network can also provide guest access. When providing guest access you can lock it down so that only people you authorize can use it, or you can keep it simple and provide a splash page where guests are asked to provide an email address or simply agree to the terms of use.



NGFW and UTM: What Is the Difference?

Over the last week or so I have been researching and trying to find the difference between NGFW (Next Generation Firewall) and UTM (Unified Threat Management). I came across some great blogs that helped me cut through the marketing hype.

In this blog the author makes some great points, essentially arguing that there is no difference. As I read through the comments, though, it was not so clear, as many argued that there is a big difference.

When I looked up the definitions of NGFW and UTM on Wikipedia to get a baseline as to where I would end up on this argument, it solidified in my mind that these are in fact the same thing.

Gartner states an NGFW should provide:

  • Non-disruptive in-line bump-in-the-wire configuration
  • Standard first-generation firewall capabilities, e.g., network-address translation (NAT), stateful protocol inspection (SPI) and virtual private networking (VPN), etc.
  • Integrated signature based IPS engine
  • Application awareness, full stack visibility and granular control
  • Capability to incorporate information from outside the firewall, e.g., directory-based policy, blacklists, white lists, etc.
  • Upgrade path to include future information feeds and security threats
  • SSL decryption to enable identifying undesirable encrypted applications



UTMs represent all-in-one security appliances that carry a variety of security capabilities including firewall, VPN, gateway anti-virus, gateway anti-spam, intrusion prevention, content filtering, bandwidth management, application control and centralized reporting as basic features. The UTM has a customized OS holding all the security features at one place, which can lead to better integration and throughput than a collection of disparate devices.


Now there may be some subtle differences here, but for the most part the two provide the same set of features. It seems to me that the main argument for a difference is that the NGFW has a more robust engine and won't suffer the performance impact that a UTM would.

This becomes even more confusing when we look at the Gartner Magic Quadrant for the two.








Palo Alto seems to be the only NGFW (or at least the only one of any significance) not in the UTM category. And how is it that Fortinet is both a UTM and an NGFW, but is not as good at being an NGFW?

If there is in fact a difference between the two, then one product cannot be both, can it?

My conclusion, therefore, is that they are the same. Some may be better than others, but they are essentially equal in features.

Your Thoughts??



A brief history of Firewalls and the current state of affairs

Back in about 1996, when the Internet was still young and many of us were scrambling to figure out what it was all about and how it worked, firewalls were big and expensive. Many customers didn't see the need to have one and would say things like "why would anyone want to attack my company?" Selling security is always tough, as you are not selling something that will help grow a business or improve processes; you are selling peace of mind. In those days there weren't that many security vendors to choose from. The big names were Check Point, Cisco and Novell. Oh, that's right, I almost forgot: does anybody remember Shiva?

Check Point was clearly the front runner with their Firewall-1 product, but was also the most complex and expensive. Novell's BorderWare was popular due to the large install base and popularity of NetWare, and Cisco trailed with the Cisco PIX. I think the first model was the PIX 10000: a 4U appliance with two 10 Mbps interfaces, complete with a floppy drive for upgrades.

We stayed away from BorderWare, as we were already down the TCP/IP path and had distanced ourselves from Novell's NetWare. Our first Check Point deployment was a nightmare. Running on top of Windows NT, it required driver upgrades and registry changes, and took forever to get working. I always felt like it was hanging by a shoestring and could blow up at any minute. The PIX, on the other hand, was almost too easy: power it up, connect a console and enter five commands to get it working. Seriously, five commands:

Inside address
Outside address x.x.x.x
Global 1 x.x.x.x
NAT (inside) 1
Route x.x.x.x

This was enough to secure the inside and let users out to the Internet.

Other than our first nightmare, there were many other reasons we did not go with Check Point. Their licensing was very confusing, their pricing was very high, and at that time they were software only and did not have an appliance solution. Later Check Point partnered with Nokia to deliver an appliance, but that was even more of a licensing nightmare. Additionally, you had to manage the routing and interfaces via the Nokia engine, and the Check Point software was a bolt-on. So, we happily sold the PIX and became experts in the field of firewalling and NAT.

Then came IPsec… Cisco were slow to respond, and their first implementation (version 5.0) in the PIX worked but was not very secure. The tunnels terminated on the outside interface, and you needed to create conduits (the PIX's term for an ACL) into the internal network. The problem was that these conduits referenced the LAN IPs at both the remote and local networks. As a test, I connected to our upstream router, created a loopback address that matched the remote LAN, and then telnetted from the loopback through the PIX into the local network.

Nortel came out with the Contivity appliance for IPsec tunnels, covering both site-to-site and remote access. Clearly they were a market leader in this area. Cisco acquired Altiga and came out with the VPN 3000 Series Concentrator. Interestingly, the interface on the VPN 3000 was very similar to the Nortel's; possibly they were both creations from the same technology. I can tell you that we still have a Contivity and a VPN 3000 running in our network, and they serve a purpose.

Cisco had also built firewalling and IPsec capabilities into their routers. Cisco's firewall implementation was called CBAC (Context-Based Access Control) and was relatively easy to configure and manage. I should point out that the access control list, the basis for any firewall configuration, had been around long before firewalls. An ACL on its own can block traffic, but it cannot dynamically allow traffic in the return direction unless there is already an ACL that permits that traffic back in. The firewalling component on top of the ACL is that it tracks the state of each connection and dynamically creates ACL entries for return traffic.
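The difference between a plain ACL and stateful inspection can be sketched in a few lines: the firewall records each outbound connection and permits only matching return traffic. A toy model of the idea, not CBAC's actual implementation; the addresses are documentation examples:

```python
# Toy model of stateful inspection. A static ACL would need a permanent rule
# for return traffic; a stateful firewall instead records each outbound
# connection and lets only matching replies back in. Illustrative only.

class StatefulFirewall:
    def __init__(self):
        self.sessions = set()  # tracked (inside_host, outside_host, port) tuples

    def outbound(self, src, dst, port):
        """An inside host opens a connection; remember the session."""
        self.sessions.add((src, dst, port))

    def inbound_allowed(self, src, dst, port):
        """Return traffic is allowed only if it matches a tracked session."""
        return (dst, src, port) in self.sessions

fw = StatefulFirewall()
fw.outbound("10.0.0.5", "93.184.216.34", 443)

# Reply to the tracked session is allowed; unsolicited traffic is not.
print(fw.inbound_allowed("93.184.216.34", "10.0.0.5", 443))  # -> True
print(fw.inbound_allowed("198.51.100.9", "10.0.0.5", 443))   # -> False
```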

Many other vendors started to show up in the market, and although I am not clear on the timing, Netscreen appeared in 1997 and quickly became a market leader in firewall and VPN technologies, so much so that Juniper bought them for $4 billion in 2004. We quickly jumped onto the Netscreen bandwagon, as Cisco had started to fall behind in some key areas for firewalls and VPNs. Netscreens were easy to configure, easy to manage, had both a CLI and a GUI, and were more cost effective than the equivalent Cisco appliances. Other vendors we started to see in the late '90s were WatchGuard and SonicWall, and although we ran into them from time to time, they were of little threat as they did not provide the features and functionality of the bigger players.

It is fascinating to me how small the industry really is, as these founders and leaders jump from company to company reinventing the same product. For example:

One of the three Netscreen founders left after only three years and founded Fortinet. Netscreen acquired OneSecure, whose founder was a Check Point engineer who later went on to create Palo Alto. This type of activity is common, and it is no wonder there are so many competitors out there today.

For the first few years after Juniper acquired Netscreen, it was business as usual, and we were enjoying designing and installing quality networks, of which Juniper/Netscreen were a big part. Juniper then made what I would consider a huge mistake. They decided that "One OS" built on the JUNOS platform was more important than enhancing the capabilities of the existing ScreenOS products. Certification requirements quickly changed, and our organization was expected to drop everything and get all our techs up to speed on JUNOS. Before jumping in with both feet, I had Juniper send me a couple of the new SRX platforms running JUNOS for testing. The SRX was supposed to be the replacement for the Netscreen firewalls, but it was not ready for prime time: there were features "not available yet", we ran into a bunch of bugs, and for some of the more basic tasks we had to run scripts within the box. Juniper's plans to end-of-life the Netscreen products did not go well, and even today many of the ScreenOS products are still available for sale.

Check Point, although still around today, had completely fallen off the radar. We rarely ran into them, and when we did, displacing them was not difficult due to their complexity and pricing.

Cisco had once again started to catch up with both their Router based Firewalls and their ASA firewalls. Between these two products and the fact that Juniper was still selling the Netscreen products we had good solid solutions through the 2000’s.

UTM (Unified Threat Management) and NGFW (Next Generation Firewall) are the next phase in the evolution of firewalls. Integrating URL filtering, application control, IPS, anti-X and in some cases DLP into one appliance is the new way to go. This is where we now see the likes of Fortinet, Palo Alto and SonicWall making headway. Cisco have once again fallen behind in this technology and are scrambling to catch up (more on this below). Dell's acquisition of SonicWall has helped them considerably, both from a marketing standpoint and probably by pumping a lot of money into R&D.

Fortinet is a solid product that works well with all of these services enabled, and to date our experience with Fortinet technical support and the RMA process has been positive. We have had some experience with Palo Alto and SonicWall, and they are also good units. My problem with Palo Alto is that I can't get them to call me back after contacting them to talk about a partnership. Not a good start to the relationship, and it leaves a bad taste in my mouth as to the level of support we would be getting. My issue with SonicWall is that because they are under the Dell brand, there are really no margins to be had. I know they are good firewalls, but are they better than Fortinet and Palo Alto? Not really.

All of these products do what they say; some have features that others don't, but overall they are all very similar. So, in the end, what it comes down to is our ability to manage and maintain a network effectively. Even though Cisco lacks features that others may have, we know what to expect from their support and product replacement. As I said, Fortinet are also good in this respect. Juniper have always had good support, but the SRX fiasco leaves them far behind.

Back to Cisco and their NGFW. Unfortunately, Cisco have done the bolt-on method again. I love their products, as they are well built and their support is still better than their competitors'. However, managing and deploying a Cisco ASA with NGFW requires two management interfaces: one for the firewall and one for the NGFW services. This is not something a network manager wants to deal with.

Through all my experience with these products, one thing is still true: no one box from any vendor does it all perfectly. We still require deployments that would be near impossible without a Cisco router. Many times the Cisco router sits in parallel with a Juniper, a Fortinet or even the Cisco ASA.

No magic bullet – sorry…..





Command Line vs. Graphical Interface

Early on in IT, the only way to manage a system, be it a MUX, controller, server or modem, was via the command line interface (CLI). Windows and Macs were our first real introduction to GUI (graphical user interface) based management, and for some tasks it was a lot easier to deal with. The younger generation has come to expect GUI systems and probably finds CLIs cumbersome and archaic, but the older generation, the generation that grew up with Unix, Cisco IOS and a number of other systems that don't exist anymore, will probably tell you that the CLI is the only way to go.

I sit somewhere in the middle. While I recognize the flexibility and speed that a CLI can give you, I sometimes yearn for the colours and charts that can make quick work of a task that otherwise requires a number of memorized commands with outputs that are sometimes hard to read. I have been configuring and managing Cisco routers and switches for well over 15 years, so when it comes to Cisco IOS, the command line is the obvious choice. It does require a fair amount of memorization, but when you do it day in and day out it becomes second nature. Cisco attempted a few GUI systems for the old PIX and now the ASA firewalls, but I can tell you, as crazy as a complex firewall configuration can get, the GUI system is that much more complex. A poorly written GUI can be even more confusing than the CLI.

I recently worked with a customer who, for the most part, was not comfortable with the CLI, and since we were looking at installing a number of Cisco switches in their network, I took the opportunity to give them a really quick Cisco CLI overview. As I said, Cisco's CLI is second nature to me, but listening to the customer's reaction, I realized how intuitive it really is.

Back before Juniper bought Netscreen, we were big fans of Netscreen's ScreenOS. Not exactly like Cisco's, but very intuitive. Juniper bought Netscreen, and for a while it was status quo, but then Juniper decided to push the SRX series, based on the JUNOS software. Now, I can use JUNOS, but is it intuitive? Is it easy to use? No and no. I know there are a lot of JUNOS fans out there, and I am by no means knocking the product or its capabilities, just pointing out that, in my opinion, its CLI is not up to snuff.

A CLI is harder to learn, requires a certain amount of memorization, and probably requires you to get a better understanding of the technology and what it is doing. Sometimes you can get something working via a GUI just by clicking around until it works. Certainly not recommended, but I've seen it in action. Whatever your preference, don't ignore the CLI. In most cases, what you do in the GUI gets translated into CLI commands behind the scenes. If you are unclear how to use the CLI on any particular system, I recommend using the GUI to set things up, then going back to the CLI to see what those GUI clicks did to the configuration.

Meraki and Cloud Management

The cloud is taking over IT… What are we all going to do when everything is cloud managed? Will we all be out of a job? It is true that the cloud is becoming a big thing, but as with all changes in technology, the devices and applications in the cloud still need to be managed and maintained by someone.

Meraki have taken the cloud a step further by providing a cloud-managed network. This is a great concept and can really simplify the deployment process. The piece that is missing is that the cloud doesn't actually manage the network: the skilled technical folks who can isolate a problem and solve it are still required, regardless of the management platform. Meraki have made things easier, and certainly the less technical would find some benefits in the Meraki approach, but as with so many technologies, some are too complex to simplify, or the process of simplifying leaves out some key elements.

Cisco's purchase of Meraki shows that Cisco really believe in the cloud, but I am curious how they plan to integrate it, if at all. Cisco's attempt at moving down-market with Linksys was a mistake, as I think most Cisco-savvy folks would balk at any Linksys product.

Looking at Meraki's products, it would appear that they are well ahead of any Linksys product and slightly behind Cisco's traditional IOS products from a feature standpoint, but obviously well ahead of any Cisco product from a network management standpoint, an area where I think Cisco have always struggled. It will be interesting to see if Cisco try to merge their IOS into Meraki's cloud management.

Either way, this has next to no effect on the business of network management, as these networks still require management, regardless of the platform on which they reside.