
Viruses? Paying Ransom? Losing Data? The Cure Is Here!

This is not hype! This is reality. Companies get hit with viruses all the time. You need a diligent team securing your infrastructure 24/7 in order for you to feel safe. Even so, you still get hit with a virus and lose data. Let us see what that vicious cycle looks like below, where a virus hits you and a solution is then proposed and integrated:

[Figure: Virus and data loss cycle]

So, a virus is created, released into the wild, and spreads quickly; it reaches your environment and your systems are under attack. Some data gets encrypted, and your business loses access to that data. In the meantime, the best minds in the world are working to document steps on how to protect against this virus, and your IT team implements measures to protect against it and its spread.

Then you realize that you have many business units asking for access to their data, or reporting at an alarming rate that they have lost it. The aftermath ends with you paying the ransom for some data sets or asking your staff to recreate their data. You have now become known as the IT leader who could not protect his or her company from this virus nor recover its data!

This endless cycle is that of a new virus coming out, a time of major pain, confusion, and data loss, followed by a virus patch, only to be followed by the same sequence of events.

Now, let's look at the following cycle:

[Figure: Breaking the virus and data loss cycle]

The virus hits like always, entering your environment again and bypassing your previous antivirus protection measures. This is the nature of our world. But this time you have a safety net: a backup and restore solution to help you avoid paying the ransom and to restart your business from an acceptable point in time. In this cycle you are able to recover your business data, continue to operate, and pay no stranger any fee!
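To make the "acceptable point in time" idea concrete, here is a minimal sketch in Python of timestamped backups and a restore to the newest snapshot taken before the infection. The paths and naming scheme are hypothetical; a real deployment would use a proper backup product with off-site copies and immutable storage.

```python
#!/usr/bin/env python3
"""Minimal sketch: timestamped snapshots plus point-in-time restore (illustrative only)."""
import tarfile
import time
from pathlib import Path

DATA_DIR = Path("/srv/business-data")   # hypothetical data location
BACKUP_DIR = Path("/mnt/backups")       # hypothetical backup target

def take_backup() -> Path:
    """Write a timestamped tar.gz snapshot of the data directory."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"data-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(DATA_DIR, arcname=DATA_DIR.name)
    return archive

def restore_latest_before(cutoff: str, target: Path) -> None:
    """Restore the newest snapshot taken before the infection (cutoff = 'YYYYMMDD-HHMMSS')."""
    snapshots = sorted(BACKUP_DIR.glob("data-*.tar.gz"))
    clean = [s for s in snapshots if s.name[len("data-"):-len(".tar.gz")] < cutoff]
    if not clean:
        raise RuntimeError("no clean backup older than the cutoff")
    with tarfile.open(clean[-1], "r:gz") as tar:
        tar.extractall(target)
```

Run take_backup() on a schedule; after an attack, restore_latest_before("20240501-090000", Path("/srv/restore")) brings back the last snapshot that predates the infection.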

Let's look at both your data's importance to the business over time and how this ongoing cycle of virus attacks and virus protection can affect you, or not.

[Figure: Data importance vs. time, with data loss]

Bear with me, I know the graph looks a bit busy; I will explain it in detail. Horizontally we have time; vertically we have the amount of data growing and, with it, the importance of that data to the business. At the right side we have a label for the horizontal lines, named "data exposed."

As time progresses we create data, and at a given point we are hit by a virus. The entire data set is exposed to being lost, corrupted, or infected. However, you might be lucky enough that not all data is impacted, and only the data set in the RED circle is the actual data loss. This happens during the time of exposure, the area depicted in a creamy mustard color. We then proceed to patch for the virus, only to start the cycle again.

However, note that you cannot control or predict which data is impacted. This data could be VERY relevant to your day-to-day operations, or it might be an older data set with less impact on your immediate business.

Now, let's look at a different environment where you have implemented a means to recover your data.

[Figure: Data importance vs. time, with data protection]

As with the previous graph, you get hit with the virus, but during the time of exposure you are able to restore data sets because you have implemented backup and restore services in your environment. This holds regardless of whether the data set is new, old, both, or all of it! Throughout attacks, infections, patches, and fixes, you will be able to restore your data from a previous point in time. This will give you the confidence that your business will persist and endure a virus attack.

What should you be protecting?

Physical servers, virtual servers, laptops, and your SaaS platforms (Google, 365, and even Salesforce). You should also be asking for DRaaS: Disaster Recovery as a Service. This will protect you from an entire site being down for whatever reason, or even a massive attack where all your systems are impacted to the point where it might be a lot simpler to flip the failover switch than to do a full restore, letting a remote site become your primary site.
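For a feel of what "flipping the failover switch" can look like when automated, here is a minimal sketch. The health endpoint and check cadence are hypothetical; a real DRaaS platform would promote the remote site through its own tested runbook or API.

```python
#!/usr/bin/env python3
"""Minimal sketch of a failover decision loop (all names are hypothetical)."""
import time
import urllib.request
import urllib.error

PRIMARY_HEALTH_URL = "https://primary.example.com/health"  # hypothetical endpoint
FAILED_CHECKS_BEFORE_FAILOVER = 3

def primary_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def watch_and_failover() -> None:
    failures = 0
    while True:
        failures = 0 if primary_is_healthy() else failures + 1
        if failures >= FAILED_CHECKS_BEFORE_FAILOVER:
            # A real DRaaS setup would promote the remote site here
            # (DNS change, replication role swap, etc.) via the provider's API.
            print("primary down -- promoting remote site to primary")
            break
        time.sleep(30)
```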

If you do not know where to start, email me at jcalderon@KIONetworks.com

Best Regards,

Julio Calderon

Twitter: @JulioCUS

Skype: Storagepro

Email: jcalderon@kionetworks.com


How to Make Your OpenStack Environment Enterprise Ready: 6 Tips

As a baseline, let’s first come to an agreement on what “Enterprise Ready” means. As a storage consultant and IT generalist with a specialty in cloud architecture, I would define enterprise ready as an environment with the following characteristics:

Predictable

No surprises here: we know and understand the environment’s behaviors during any stress point.

Available

Availability, measured in uptime, indicates how many nines the environment supports and, in general, the practices that need to be in place to guarantee a highly available environment.
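To make the nines concrete, here is a quick back-of-the-envelope calculation (plain Python) of the downtime budget each uptime level implies per year:

```python
# Downtime budget per year implied by each "nines" level (simple arithmetic).
MINUTES_PER_YEAR = 365 * 24 * 60

for nines, uptime in [("two nines", 0.99), ("three nines", 0.999),
                      ("four nines", 0.9999), ("five nines", 0.99999)]:
    allowed = MINUTES_PER_YEAR * (1 - uptime)
    print(f"{nines} ({uptime:.5f}): {allowed:,.1f} minutes of downtime per year")
# two nines allows ~3.65 days per year; five nines allows ~5.3 minutes.
```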

Fast

The performance of the environment should be dependable and we should be able to set clear expectations with our clients and know which workloads to avoid.

Well Supported

There should be a help line with somebody reliable to back you up in knowledge and expertise.

Expandable

We should know where we can grow and by how much.

Low Maintenance

The environment should be so low-maintenance as to be a “set it and forget it” type of experience.

How to Get There: Artificial Intelligence

Now that we know the characteristics and their meanings, the question is, how do we make our open source environment enterprise ready? Let’s take them one at a time. Hint: artificial intelligence can help at every turn.

Predictable

To make your OpenStack environment enterprise ready, you need to perform a wide range of testing to discover how it functions during issues, failures, and high workloads. At KIO Networks, we do continuous testing and internal documentation so our operations team knows exactly what testing was done and how the environment behaved.

Artificial intelligence can help by documenting historical behavior and predicting, down to the minute, when our operations team will encounter an anomaly. It’s the fastest indication that something’s not running the way it’s supposed to.
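As a toy illustration of the idea, not of any vendor's actual algorithm, here is a minimal sketch that learns "normal" from a rolling window of a metric and flags values that deviate sharply:

```python
#!/usr/bin/env python3
"""Toy sketch of anomaly detection on a metric stream: a rolling z-score."""
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # samples of history that define "normal"
THRESHOLD = 3.0    # flag anything more than 3 standard deviations away

def detect_anomalies(samples):
    history = deque(maxlen=WINDOW)
    for t, value in enumerate(samples):
        if len(history) == WINDOW:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > THRESHOLD:
                yield t, value  # an operator would get an alert here
        history.append(value)

# Example: steady latency with one spike at t=80
metric = [10.0 + (i % 5) * 0.1 for i in range(100)]
metric[80] = 45.0
print(list(detect_anomalies(metric)))  # -> [(80, 45.0)]
```

A production platform learns across thousands of metrics and log streams at once, but the core intuition is the same: model normal behavior from history, then surface deviations early.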

Available

To test high availability, we perform component failures and document the behavior. It is important to fail every single component, including hardware, software, and supporting dependencies for the cloud environment like Internet lines, power supplies, load balancers, and physical or logical components. In our tests, there are always multiple elements that fail and are either recovered or replaced. You need to know your exposure time: how long it takes your team both to recover and to replace an element.
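A simple way to put a number on exposure time during a failure drill is to poll a health endpoint and record when the component went down and when it came back. The sketch below assumes a hypothetical internal health URL:

```python
#!/usr/bin/env python3
"""Sketch: measure exposure time during a failure drill (endpoint is hypothetical)."""
import time
import urllib.request
import urllib.error

HEALTH_URL = "http://cloud-api.internal:8080/healthcheck"  # hypothetical

def wait_until(healthy: bool, poll: float = 1.0) -> float:
    """Block until the endpoint reaches the desired state; return the timestamp."""
    while True:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=2) as r:
                up = r.status == 200
        except (urllib.error.URLError, TimeoutError):
            up = False
        if up == healthy:
            return time.monotonic()
        time.sleep(poll)

# Run this while the team pulls the component under test:
went_down = wait_until(healthy=False)
came_back = wait_until(healthy=True)
print(f"exposure time: {came_back - went_down:.1f} seconds")
```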

AI-powered tools complement traditional monitoring mechanisms. Monitoring mechanisms need to know what your KPIs are, and from time to time you may encounter a new problem and need to establish a new KPI for it alongside additional monitoring. With AI, you can see that something abnormal is happening, and that clarity will help your administrators home in on the issue, fix it, and create a new KPI to monitor. The biggest difference with an AI-powered tool is that you’re able to do that without the surprise outage.

Fast

Really, this is about understanding speed and either documenting limitations or opting for a better solution. Stress testing memory, CPU, and storage IO is a great start. Doing so at a larger scale is desirable in order to learn breaking points and establish KPIs for capacity planning and, just as important, day-to-day monitoring.
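As one possible starting point, the sketch below drives CPU/memory pressure and a random-read IO test from Python using the common stress-ng and fio tools. It assumes both are installed; the durations, sizes, and scratch path are illustrative, not our actual test plan.

```python
#!/usr/bin/env python3
"""Sketch: drive CPU/memory and storage-IO stress tests (parameters are illustrative)."""
import subprocess

def run(cmd: list[str]) -> None:
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# CPU and memory pressure for 60 seconds
run(["stress-ng", "--cpu", "8", "--vm", "2", "--vm-bytes", "4G", "--timeout", "60s"])

# Random-read IO test against a scratch file to probe storage breaking points
run(["fio", "--name=randread", "--rw=randread", "--bs=4k", "--size=1G",
     "--numjobs=4", "--iodepth=32", "--runtime=60", "--time_based",
     "--filename=/tmp/fio-testfile"])
```

Sweep the job counts and block sizes upward until latency degrades; the knee of that curve is the breaking point worth recording as a KPI.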

Do you know of a single person who would be able to manually correlate logs to understand if performance latency is improving based on what’s happening now compared to yesterday, 3 weeks ago, and 5 months ago? It’s impossible! Now, imagine your AI-powered platform receiving all your logs from your hardware and software. This platform would be able to identify normal running conditions and notify you of an issue as soon as it sees something unusual. This would happen before it hits your established KPIs, before it slows down your parallel storage, before your software-defined storage is impacted, and before the end user’s virtual machine times out.
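To appreciate how tedious this correlation is to do by hand, here is a minimal sketch that computes a daily 95th-percentile latency from logs, assuming a hypothetical "latency_ms=" log format. An AI platform does this continuously, across all sources, without predefined signatures.

```python
#!/usr/bin/env python3
"""Sketch: daily p95 latency from logs (log format and path are hypothetical)."""
import re
from collections import defaultdict
from statistics import quantiles

# Assumed line format: '2024-05-01T12:00:03 latency_ms=12.4'
LINE = re.compile(r"^(\d{4}-\d{2}-\d{2})T\S+ latency_ms=([\d.]+)")

def p95_by_day(path: str) -> dict[str, float]:
    by_day = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            m = LINE.match(line)
            if m:
                by_day[m.group(1)].append(float(m.group(2)))
    # quantiles(..., n=20)[18] is the 95th percentile cut point
    return {day: quantiles(vals, n=20)[18]
            for day, vals in by_day.items() if len(vals) > 1}

# print(p95_by_day("/var/log/storage/latency.log"))  # hypothetical path
```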

Well Supported

We emphasize the importance of continuously building our expertise in-house but also rely on certain vendors as the originators of code that we use and/or as huge contributors to open source projects. It’s crucial for businesses to keep growing their knowledge base and to continue conducting lab tests for ongoing learning.

I don’t expect anyone to build their own AI-powered platform. Many have built log platforms with visualization front ends, but this is still a manual process that relies heavily on someone to do the correlation and create new signatures for searching specific information as needed. However, if you are interested in a set of signatures that’s self-adjusting, never rests, and can predict what will go wrong, alongside an outside team that’s ready to assist you, I would recommend Loom Systems. I have not found anything in the market yet that comes close to what they do.

Expandable

When testing growth, the question always is: what does theory tell you, and what can you prove? Having built some of the largest clouds in LATAM, KIO knows how to manage a large-volume cloud, but smaller companies can always reach out to peers or hardware partners to borrow hardware. Of course, there’s always the good, old-fashioned way: you buy it all, build it all, test it all, shrink it afterwards, and sell it. All of the non-utilized parts can be recycled to other projects. Loom Systems and its AI-powered platform can help you keep watch over your infrastructure as your human DevOps teams continue to streamline operations.

Low Maintenance

Every DevOps team wants a set-it-and-forget-it experience. Yes, this is achievable, but how do you get there? Unfortunately, there’s no shortcut. It takes learning, documenting, and applying lessons to all of your environments. After many man-hours of managing such an environment, our DevOps team has applied scripts to self-heal and correct, built templates to monitor and detect conditions, and set up monitors to alert them when KPIs are being hit. The process is intensive initially, but eventually dedicated DevOps teams get to a place where their environment is low maintenance.
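One of those self-healing scripts can be as simple as the sketch below, which watches a service and restarts it if it stops. The unit name is illustrative; adjust it to your distribution, and note that a real monitor would also alert the team and log the incident.

```python
#!/usr/bin/env python3
"""Sketch of one self-healing monitor: restart a service if it stops."""
import subprocess
import time

SERVICE = "openstack-nova-api"  # illustrative unit name

def is_active(service: str) -> bool:
    # systemctl exits 0 when the unit is active; --quiet suppresses output
    r = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return r.returncode == 0

while True:
    if not is_active(SERVICE):
        print(f"{SERVICE} is down -- attempting restart")
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
        # A real monitor would also page the on-call admin and record the event.
    time.sleep(60)
```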

The AI-powered platform from Loom Systems helps you by alerting you of the unknown. Your team will be shown potential fixes and be prompted to add new fixes. As time goes by, the entire team will have extensive documentation available that will help new or junior admins just joining the team. This generates a large knowledge base, a mature project, and also a lower-maintenance team.

All serious businesses should enjoy the benefits of running a predictable, highly available, fast, well supported, easily expandable and low-maintenance environment.  The AI-powered platform built by Loom Systems takes us there much faster and gives us benefits that are usually reserved for huge corporations. Just as an example, if you’re the first in the market offering a new product or service, you can feel confident with Loom Systems that they’ll detect problems early and give you actionable intelligence so you can fix them with surgical precision.

It’s been a pleasure sharing my learnings with you and I look forward to hearing your feedback. Please share your comments and points of view – they’re all welcome!

 

Best Regards,

Julio Calderon

Twitter: @JulioCUS

Skype: Storagepro

Email: jcalderon@kionetworks.com