A System Administrator’s Guide to Containers

Anyone who is part of the IT industry will have come across the word “container” in the course of their work. After all, it’s one of the most overused terms right now, and it means different things to different people depending on the context. A standard Linux container is nothing more than a regular process running on a Linux-based system, set apart from other process groups by Linux security constraints, resource limits, and namespaces.

Identifying the Right Processes

When you boot one of the current crop of Linux systems and inspect a process with cat /proc/PID/cgroup, you can immediately see that the process belongs to a cgroup. A closer look at /proc/PID/status reveals its capabilities, /proc/self/attr/current shows its SELinux label, and /proc/PID/ns lists the namespaces the process is currently in.
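
These /proc checks can be scripted. Here is a minimal sketch in Python (the paths are Linux-only, and the code is guarded so it simply returns empty results elsewhere):

```python
import os

def inspect_process(pid="self"):
    """Summarize cgroup membership and namespaces for a process via /proc."""
    info = {"cgroups": [], "namespaces": {}}
    cgroup_path = f"/proc/{pid}/cgroup"
    ns_dir = f"/proc/{pid}/ns"
    if os.path.exists(cgroup_path):
        with open(cgroup_path) as f:
            info["cgroups"] = [line.strip() for line in f if line.strip()]
    if os.path.isdir(ns_dir):
        for name in os.listdir(ns_dir):
            # Each entry is a symlink like "mnt -> mnt:[4026531840]"
            info["namespaces"][name] = os.readlink(os.path.join(ns_dir, name))
    return info

print(inspect_process())
```

Running the same script on the host and inside a container shows different cgroup paths and namespace IDs, which is exactly the separation described above.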

Thus, if a container is defined as a process with resource constraints, namespaces, and Linux security constraints, it can be argued that every process on a Linux system is in a container. This is precisely why it is often said that Linux is containers and containers are Linux.

Container Components

The term “container runtime” refers to a tool that sets up resource limits, namespaces, and security constraints, and then launches the container. The idea of a “container image” was initially introduced by Docker, and refers to a regular TAR file comprising two parts:

  • JSON file: This determines how the rootfs must be run, including the entrypoint or command to run in the rootfs once the container starts, the container’s working directory, environment variables to set for that particular container, and a few other settings.
  • Rootfs: The container root filesystem, a directory on the system that resembles the regular root (/) of the OS.
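
As a rough illustration of the JSON half of an image, here is a hedged sketch; the field names are simplified stand-ins, not the exact Docker/OCI image-config schema:

```python
import json

# Illustrative only: simplified field names, not the exact
# Docker/OCI image-config schema.
image_config = {
    "Entrypoint": ["/usr/bin/myapp"],   # command run when the container starts
    "WorkingDir": "/srv/app",           # container's working directory
    "Env": ["PATH=/usr/bin:/bin", "APP_MODE=production"],
}

config_json = json.dumps(image_config, indent=2)
print(config_json)
```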

What happens is, Docker “tars up” the rootfs, and together with the JSON file this forms the base image. A user can then install extra content into the rootfs, create a fresh JSON file, and tar up the difference between the original image and the new one with the updated JSON file. The result is a layered image.
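
The “tar up the difference” step can be modeled with plain dictionaries standing in for filesystem trees; this is an illustrative sketch, not Docker’s actual implementation:

```python
def layer_diff(base, updated):
    """Return only the files added or changed relative to the base rootfs.

    Models the 'tar up the difference' step: base and updated are dicts
    mapping file paths to contents, standing in for real filesystem trees.
    """
    return {path: data for path, data in updated.items()
            if base.get(path) != data}

base_rootfs = {"/bin/sh": "shell-v1", "/etc/os-release": "demo"}
updated_rootfs = dict(base_rootfs, **{"/usr/bin/myapp": "app-v1"})

new_layer = layer_diff(base_rootfs, updated_rootfs)
print(new_layer)  # only /usr/bin/myapp belongs in the new layer
```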

Building Blocks of Containers

The tools commonly used for building container images are known as container image builders. In some cases container engines perform this task, but numerous standalone tools for creating container images are also available. Docker took these container images (tarballs) and moved them to a web service from which they could later be pulled, developed a protocol for pulling them, and dubbed the web service a container registry.

Container engines are programs that can pull container images from container registries and reassemble them onto container storage. Container engines are also responsible for launching container runtimes.

Container storage is generally a copy-on-write (COW) layered filesystem. Once a container image is pulled down from the container registry, its rootfs must first be untarred and placed on disk. If the image has multiple layers, each one is downloaded and stored on a separate layer of the COW filesystem. Storing each layer separately increases sharing between layered images. Container engines tend to support multiple kinds of container storage, such as overlay, device-mapper, btrfs, aufs, and zfs.
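
The way a COW filesystem presents stacked layers as a single view can be sketched in a few lines; again, dictionaries stand in for real layers:

```python
def resolve(path, layers):
    """Look up a file across stacked layers; the topmost layer wins.

    layers is ordered bottom-to-top, mimicking how an overlay filesystem
    presents one merged view of several read-only layers.
    """
    for layer in reversed(layers):
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

layers = [
    {"/bin/sh": "shell-v1", "/etc/motd": "base"},   # base image layer
    {"/etc/motd": "patched"},                        # update layer
]
print(resolve("/etc/motd", layers))  # "patched": the upper layer shadows the base
print(resolve("/bin/sh", layers))    # "shell-v1": falls through to the base layer
```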

Once the container engine has downloaded the container image into container storage, it must create a container runtime configuration. This runtime configuration combines input from the user or caller with the content of the container image specification. The layout of the container runtime configuration and the exploded rootfs are standardized by the OCI standards body.
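
For a rough sense of what such a runtime configuration contains, here is a heavily simplified sketch; the real OCI specification defines many more fields than shown:

```python
import json

# Heavily simplified sketch of an OCI runtime config.json;
# the real specification defines many more fields.
runtime_config = {
    "ociVersion": "1.0.0",
    "root": {"path": "rootfs", "readonly": False},   # the exploded rootfs
    "process": {
        "cwd": "/srv/app",                  # from the image config
        "args": ["/usr/bin/myapp"],         # entrypoint plus caller overrides
        "env": ["PATH=/usr/bin:/bin"],
    },
}
print(json.dumps(runtime_config, indent=2))
```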

The container engine then launches a container runtime that reads the container runtime specification and sets up the Linux cgroups, namespaces, and security constraints. Afterward, the container command is launched to become PID 1 of the container. At this point, the container engine can relay stdin/stdout back to the caller and control the container.

Please keep in mind that several container runtimes have been introduced that use different parts of Linux to isolate containers. Users can run containers with KVM separation or apply other hypervisor strategies. Because a standard runtime specification exists, all of these tools can be launched by the same container engine. Even Windows can use the OCI Runtime Specification to launch Windows containers.

Container orchestrators sit at a higher level. These tools help coordinate the execution of containers across many different nodes. They interact with container engines to manage containers: telling engines to start containers and wire their networks together, monitoring the containers, and launching additional ones as the load grows.
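
The heart of an orchestrator can be pictured as a reconcile loop that compares desired replica counts against running ones and asks the engine to start or stop containers; the engine interface below is hypothetical, invented for illustration:

```python
def reconcile(desired, running, engine):
    """One pass of an orchestrator-style control loop.

    desired maps service name -> wanted replica count; running maps
    service name -> currently running count. 'engine' stands in for a
    container engine and only needs start/stop methods (a hypothetical
    interface, not any real engine's API).
    """
    for service, want in desired.items():
        have = running.get(service, 0)
        for _ in range(want - have):
            engine.start(service)
        for _ in range(have - want):
            engine.stop(service)

class FakeEngine:
    """Records the actions an engine would be asked to perform."""
    def __init__(self):
        self.actions = []
    def start(self, service):
        self.actions.append(("start", service))
    def stop(self, service):
        self.actions.append(("stop", service))

engine = FakeEngine()
reconcile({"web": 3, "db": 1}, {"web": 1, "db": 2}, engine)
print(engine.actions)  # two web starts, one db stop
```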

Benefits of Containers

Containers provide numerous benefits that enable DevOps workflows, including:

  • A simple solution for consistent development, testing and production environments
  • Simpler updates
  • Support for numerous frameworks

When the user writes, tests, and deploys an application within containers, the environment stays the same at every stage of the delivery chain. Collaboration between separate teams also becomes easier, since they all work in the same containerized environment.

When software needs to be continuously delivered, application updates must roll out on a constant, streamlined schedule. Containers make this possible because applying updates becomes easier. Once an app is distributed into numerous microservices, each one is hosted in a separate container. When one part of the app is updated by restarting its container, the rest of it remains uninterrupted.

When performing DevOps, it helps to have the agility to switch conveniently between deployment platforms or programming frameworks. Containers provide that agility because they are comparatively agnostic toward deployment platforms and programming languages. Nearly any kind of app can run inside a container, irrespective of the language it’s written in. What’s more, containers can be moved easily between different kinds of host systems.

Concluding Remarks

There are plenty of reasons why containers simplify DevOps. Once system administrators understand the basic nature of containers, they can easily apply that knowledge when planning a migration at their organization.

Author: Rahul Sharma

Top DevOps tools for 2019

Software development has undergone a revolution of sorts thanks to the integration of Development and Operations. But if you’re unfamiliar with DevOps and wish to enhance your existing processes, it can be quite challenging to figure out the best tools for your team. To help, we’ve compiled a list of the 10 most effective DevOps tools in 2019 so you can make an informed decision and add them to your stack. Find more details below:

  1. Puppet
    This open source configuration management and deployment orchestration tool is ideal for managing multiple application servers at the same time. Puppet provides a unified platform that the development team can use to automate configuration and remediate sudden changes.
    The product solutions for this tool cover cloud services, networking systems, and applications. There are over 5,000 modules available and, best of all, it integrates with other useful DevOps tools. Manage different teams effectively with Puppet Enterprise, which supports role-based access control and real-time reporting.
  2. Docker
    Docker is at the forefront of the containerization trend that has taken the IT industry by storm. This tool provides secure packaging, deployment, and execution of applications without being impacted by the running environment.
    Each application container holds the source code, runtime, supporting files, etc. used to execute applications. Access containers with the Docker Engine and execute applications in a remote environment. Docker helps companies minimize infrastructure expenses.
  3. Ansible
    A simple but powerful IT configuration management and orchestration tool, Ansible is perfect for organizations that need a program that doesn’t guzzle device resources in the background. Ansible’s primary function is to push fresh changes within the present system and to configure newly deployed machines. Increased scalability and replication speed at reduced infrastructure cost are just two reasons why Ansible has become the go-to DevOps tool for many IT firms.
  4. Git
    Git is among the most well-known DevOps tools and with good reason. This distributed source code management tool has been a godsend for open source contributors and remote teams. It lets you monitor your development activity’s progress.
    Numerous versions of the source code may be saved with Git, but you’re free to restore a previous version if required. The tool allows for extensive experimentation, since you’re able to form individual branches and merge new features once they are ready.
    Integrating Git with the DevOps workflow requires you to host repositories so team members are able to push their work. Bitbucket and GitHub are two of the finest Git repository hosting services right now. Both offer amazing integrations.
  5. JFrog Artifactory
    This is the sole universal repository manager in the world whose clients comprise 70 percent of the Fortune 100. That gives JFrog Artifactory enough clout in the industry to fully support software developed in any language and be compatible with any technology. Developers enjoy the fact that this open source tool integrates with current ecosystems to support end-to-end binary management.
    JFrog works to hasten development cycles with binary repositories, forming a single place where teams can manage their artifacts efficiently. The tool is updated continuously and tracks artifacts from the development phase through to release.
  6. Chef
    Chef is used for managing data, roles, attributes, and environments. This configuration management automation tool is quite powerful and allows you to manage infrastructure as code.
    Chef integrates easily with cloud-based platforms and supports operating systems such as FreeBSD, AIX, and RHEL/CentOS. This open source tool also benefits from the support of an active, fast-growing, and smart community.
  7. Bamboo
    This popular DevOps tool is a CI/CD solution meant for delivery pipeline automation, from builds to deployment. Considering that Bamboo is not open source software, companies should weigh their goals and budgets before investing in this tool.
    However, once a company does opt for Bamboo, it will benefit from numerous pre-built functionalities, which is why Bamboo needs fewer plugins than comparable DevOps tools. Bamboo also integrates seamlessly with other Atlassian products, like Bitbucket and Jira.
  8. Jenkins
    This tool is prized by software developers for its ease of use. Compatible with Linux, Mac OS X, and Windows, Jenkins lets you automate various stages of the delivery pipeline while monitoring the execution of repeated tasks. The plugin ecosystem for Jenkins is vast and varied, making it easier to pinpoint issues in a specific project.
  9. Sentry
    Sentry’s clients include the likes of Microsoft and Uber, so that should tell you everything worth knowing about this error-detection DevOps tool. The open source tool supports platforms and languages like iOS, Ruby, and JavaScript, and contains built-in SDKs that can be customized to support most frameworks and languages. The tool constantly scans lines of code across the whole system and pushes notifications when a problem or error is detected. Suitable fixes may be incorporated with a single click.
  10. Nagios
    This free DevOps monitoring tool helps you keep an eye on your infrastructure for locating and fixing issues. Nagios lets you record outages, failures, and events. It’s also great for tracking trends through reports and graphs, so you can predict errors and outages and locate possible security risks.
    The rich plugin ecosystem of this tool makes it a standout among the competition. The four monitoring solutions offered by Nagios include Nagios XI, Nagios Fusion, Nagios Log Server, and Nagios Core.
    Nagios is a great addition to any Development and Operations team due to its comprehensive infrastructure monitoring capabilities. However, keep in mind that the tool could take some time to set up properly, as you first need to make it compatible with the environment.

Concluding Remarks

It’s 2019 and the DevOps market is booming. No wonder it has become one of the most competitive business segments this year, with a fast rate of growth. With applications becoming increasingly complex, it is important for software companies to prepare for international market demands that require high-performance automation. Choosing the right DevOps tool goes a long way toward supporting this fast rate of business evolution.

The Evolution of Data Protection

Data has penetrated every facet of our lives. It has evolved from an imperative procedural function into an intrinsic component of modern society. This transformative eminence has placed an expectation of responsibility on data processors, data subjects, and data controllers, who have to respect the inherent values of data protection law. As privacy rights continually evolve, regulators face the challenge of identifying how best to protect data in the future. While data protection and privacy are closely interconnected, there are distinct differences between the two. To sum it up: data protection is about securing data from unauthorized access, while data privacy is about authorized access, namely who defines it and who has it. Essentially, data protection is a technical issue whereas data privacy is a legal one. For industries that are required to meet compliance standards, there are indispensable legal implications associated with privacy laws. And guaranteeing data protection alone may not satisfy every stipulated compliance standard.

Data protection law has undergone its own evolution. Instituted in the 1960s and 70s in response to the rising use of computing, and re-enlivened in the 90s to handle the trade of personal information, data protection is becoming more complex. In the present age, the relative influence and importance of information privacy to cultural utility can’t be overstated. New challenges are constantly emerging in the form of new business models, technologies, services, and systems that increasingly rely on ‘Big Data’, analytics, AI, and profiling. The very environments and spaces we occupy and pass through generate and collect data.

Technology enthusiasts have been adopting new data management techniques such as ETL (Extract, Transform, and Load). ETL is a data warehousing process that uses batch processing and helps business users analyze data which is relevant to their business objectives. There are many ETL tools which manage large volumes of data from multiple data sources, manage migration between multiple databases and easily load data to and from data-marts and data warehouses. ETL tools can also be used to convert (transform) large databases from one format or type to another.
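
A toy ETL pipeline makes the three stages concrete; the data and field names here are invented purely for illustration:

```python
def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(rows):
    """Transform: normalize fields and drop records that fail validation."""
    return [
        {"name": r["name"].strip().title(), "sales": float(r["sales"])}
        for r in rows
        if r.get("name") and r.get("sales") is not None
    ]

def load(rows, warehouse):
    """Load: write the cleaned records into the target store (a dict here)."""
    for r in rows:
        warehouse[r["name"]] = r["sales"]
    return warehouse

source = [
    {"name": "  alice ", "sales": "1200.50"},
    {"name": "bob", "sales": "980"},
    {"name": "", "sales": "42"},        # dropped by validation
]
warehouse = load(transform(extract(source)), {})
print(warehouse)  # {'Alice': 1200.5, 'Bob': 980.0}
```

Real ETL tools batch this over millions of rows and many sources, but the shape of the work is the same.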

The Limitations of Traditional DLP

Dated DLP solutions offer little value. Most traditional DLP implementations mainly consist of network appliances designed primarily to look at gateway egress and ingress points. The corporate network has evolved; the perimeter has all but dissolved, leaving network-only solutions full of gaps. Couple that with the dawn of the cloud and the reality that most threats emanate at the endpoint, and you understand why traditional, network-appliance-only DLP is limited in its effectiveness.

DLP solutions are useful for identifying properly defined content but fall short when an administrator is trying to identify other sensitive data, such as intellectual property that might contain schematics, formulas, or graphic components. As traditional DLP vendors stay focused on compliance and controlling the insider, progressive DLP solutions are evolving their technologies, both on the endpoint and within the network, to enable a complete understanding of the threats that target data.

The data protection model has to transform to include a focus on understanding threats irrespective of their source. Demand for data protection within the enterprise is rising, as is the variety of threats taxing today’s IT security admins. This transformation demands advanced analytics and enhanced visibility to conclusively identify what the threat is, plus versatile controls to respond appropriately based on business processes and risk tolerance.

Factors Driving the Evolution of Data Protection

Current data protection frameworks have their limitations, and new regulatory policies may have to be developed to address emerging data-intensive systems. Protecting privacy in this modern era is crucial to good and effective democratic governance. Some of the factors driving this shift in attitude include:

Regulatory Compliance: Organizations are subject to obligatory compliance standards imposed by governments. These standards typically specify how businesses should secure Personally Identifiable Information (PII) and other sensitive information.

Intellectual Property: Modern enterprises typically have intangible assets, trade secrets, or other proprietary information like business strategies, customer lists, and so on. Losing this type of data can be acutely damaging. DLP solutions should be capable of identifying and safeguarding critical information assets.

Data visibility: In order to secure sensitive data, organizations must first be aware it exists, where it exists, who is utilizing it and for what purposes.

Data Protection in The Modern Enterprise

As technology continues to evolve and IoT devices become more and more prevalent, several new privacy regulations are being ratified to protect us. In the modern enterprise, you need to keep your data protected and remain compliant, all while worrying about a myriad of threats: malicious attacks, accidental data leakage, BYOD, and much more. Data protection has become essential to the success of the enterprise. Privacy by Design, incorporating data privacy and protection into every IT initiative and project, has become the norm.

The potential risks to sensitive corporate data can be as narrow as the malfunction of a few sectors on a disk drive or as broad as the failure of an entire data center. When building data protection into an IT project, there are multiple considerations an organization has to deal with beyond selecting which backup and recovery solution to use. It’s not enough to ‘just’ protect your data – you also have to choose the best way to secure it. The best way to accomplish this in a modern enterprise is to find a solution that delivers intelligent, person-centric, data-centric, fine-grained protection in an economical and rapidly recoverable way.

Author: Gabriel Lando

Choosing The Right Cloud-to-Cloud Backup Vendor

Enterprises are moving their data and applications to the cloud, with Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) usage rising steadily over the past couple of years. According to research firm IDC, more than half of organizations currently utilize some form of hybrid cloud configuration. IDC predicts that the cloud software market will grow to $151.6 billion by 2020 with a five-year CAGR of 18.6 percent – surpassing the growth of conventional software. This trend is largely driven by the rising number of services and applications being delivered from the cloud. Cloud solutions are sometimes so fluid that end-users and IT teams assume everything ‘simply works’, leaving crucial issues like data security entirely up to the provider.

Though cloud-based applications may be ‘safer’, they are not unassailable. You should consider yourself fully responsible for your SaaS-based data, including every aspect of its security. Backing up your SaaS data provides the continued benefits of the cloud while retaining a secure copy that is shut off from the SaaS environment. Regrettably for IT decision-makers, the cloud-to-cloud backup market is somewhat immature and fragmented. Given the stark contrast between cloud computing environments, backup solutions vary just as widely in capabilities. There are numerous options on the market, and choosing the right one can be an uphill task. Here are a few things to consider.

Backup and Restore Capabilities

Not all backup solutions are created equal. Since SaaS applications are offered via API or a website, the available backup procedures tend to vary, which creates a significant challenge for Backup-as-a-Service (BaaS) providers. The ideal cloud-to-cloud backup solution should include a simplified and automated way to securely back up your system data (including audit logs and metadata) from one cloud to another. It is also important to review the vendor’s disaster recovery capabilities beforehand. Ensure the solution offers granular recovery capabilities along with robust search and browse features that facilitate faster, self-service recovery: instead of waiting for IT to respond, end-users can efficiently perform the recovery on their own.

Backup Frequency

While most SaaS backup solutions allow you to back up your data at the click of a button, not all of them offer it as an automated service. Ensure this option is available for your data security and peace of mind, so that your business operations and pace of growth remain unaffected. Some services only offer preset intervals such as daily, weekly, or monthly; others may let you set custom intervals. Your business requirements should match the vendor’s available options. The cloud-to-cloud backup solution should also be capable of sending out notifications or alerts for failed backups. Though automation frees up your time and guarantees round-the-clock protection, the ability to force a manual backup will prove convenient when making extensive changes.

Security and Compliance

Data security remains one of the most critical aspects of a modern enterprise, so understanding the safeguards built into storing your backups is crucial. Go for a SaaS backup provider that offers robust encryption coupled with strict privacy policies to protect your sensitive data. The cloud-to-cloud backup solution should also be fully compliant with any regulations that require you to meet specific standards in securing your data. Regulatory requirements can become an issue when cross-border data flows are involved: an organization can be held responsible for a data breach even if it isn’t aware where in the cloud its data is stored. Regulatory requirements that govern the timing of permanent deletion of backed-up data should also be taken into consideration. Ensure the vendor can support your organization’s specific data-retention requirements.

Application Subscription Autonomy

A cloud-to-cloud backup vendor should have tools in place to handle the potential unavailability of the source SaaS application itself. For example, an organization might opt to cancel its G Suite subscription after using it for several years. Retaining that invaluable G Suite data will be a prime concern, so a good BaaS vendor should offer a path to data recovery even if the source cloud subscription has been cancelled. Be sure to inquire about independent access when assessing vendors.

Cost Benefits

Regardless of the features or services, cost will always be a constraining factor when selecting a BaaS provider. Remember, the best solution for you is one that fits your budget. On the other hand, expensive doesn’t always translate to quality, especially if you are paying for services you aren’t fully utilizing. Analyze your data storage requirements, both current and future, so that you can select a cost-effective backup solution. Each cloud-to-cloud backup provider has its own pricing model, typically on a per-user, per-month/year, per-application basis. Don’t forget to make inquiries regarding hidden charges tied to things like software updates, customer support, or bandwidth, if any.

Having a backup of your SaaS data provides peace of mind and guarantees business continuity in the event of data loss. There are multiple cloud-to-cloud backup providers out there, so it is important to spend time analyzing each one’s pricing model and feature set to ensure it can meet all your backup needs.

Must-Have Windows System Admin Tools in 2018

Open source applications and tools simplify the lives of Windows system administrators considerably. There are plenty of open source system admin tools that improve administrators’ performance and efficiency: some automate basic administration functions, while others help with troubleshooting and maintenance.


Thanks to the introduction of new technologies and web services, system administrators are keeping busy nowadays. Not only must they configure, upkeep, and ensure smooth operations of computer systems within a limited budget, but they must also contend with the growing number of digital threats, changing security policies, training, and technical support. No wonder these individuals need all the support they can get!

Thankfully, we’ve compiled a list of open source tools that will not only serve this purpose in 2018 but for the next few years as well.


  1. Git


System administrators will find it easier to handle projects of varying sizes with Git, an open source distributed version control tool. This free system is not only easy to use but also fast and efficient. You get access to lots of handy features, such as staging areas, multiple workflows, enhanced GPG signing for commits and tags, colour controls, and more. Thanks to Git, you don’t have to spend the whole day creating a test setup; you can simply create a branch and then clone it. And thanks to the change history, configuration changes can easily be monitored.

System administrators can now maintain numerous independent local branches thanks to Git’s branching model. Creating, merging, and deleting a particular branch takes just a few seconds. Plus, users can form a branch whenever they wish to test out a new idea, and delete it quickly in case it doesn’t live up to expectations. Perhaps the most surprising aspect is that Git’s internal data format is capable of supporting dates beyond 2100.


  2. Kubernetes

Google’s Kubernetes is an incredibly powerful system offering horizontal scaling features to Windows system admins. They can scale an app up and down with a single command, through a user interface, or automatically based on CPU usage. Kubernetes automates functions like scaling, deployment, and management of containerized apps. Thanks to this tool, sysadmins can place containers automatically based on their infrastructure and other requirements without sacrificing availability.

Nodes are servers in Kubernetes that configure container networking and run assigned workloads. With Kubernetes, nodes stay connected to the cluster group. When a container fails to respond to a user-defined health check, it gets removed, and if a container fails outright, it is immediately restarted. When nodes die, their containers are replaced and rescheduled.



A unique IP address is assigned to each container with Kubernetes, while a set of containers shares one DNS name. Getting a basic cluster running takes only a couple of commands.


  3. Eclipse

One of the most commonly used integrated development environments (IDEs), Eclipse started off as a Java development tool but soon evolved into something that could be used to create apps in other programming languages, such as Perl, PHP, Python, and C/C++. Eclipse’s cloud versions support web technologies like HTML, CSS, and JavaScript. System administrators also benefit from the support of more than 250 open source projects, most of which are connected to development tools.


  4. Docker



Developed using open source technology, Docker addresses different kinds of infrastructure and applications for both developers and system administrators. Apps can be created easily, deployed, and then run in containers on Linux servers. Due to the low overhead and small footprint, sysadmins enjoy plenty of flexibility and require fewer systems. If you are moderately skilled in developing software, Docker can be used to create a Linux container easily; all that is required is a working Dockerfile and Docker setup.


There are two editions of Docker available – the Community Edition and the Enterprise Edition. While the former provides developers with the tools necessary to create applications, the latter offers multi-architecture operations to IT. Many big tech companies like Microsoft and Red Hat use Docker in collaboration with their services.


  5. PowerShell

This is a task-based scripting language and command-line shell developed by Microsoft and built on the .NET framework. System administrators use PowerShell to control and automate Windows administration. Loaded with useful features, like Get-Module, Get-Help, and remote management, PowerShell allows system administrators to remotely manage Windows PCs and Windows Server, run PowerShell commands, or access complete PowerShell sessions on remote Windows systems.


To use this remote management tool, you must download the Remote Server Administration Tools Active Directory PowerShell module on your system along with the WinRM tool. With the -Filter parameter, you can apply filters in PowerShell and easily locate what you’re searching for.


  6. NetBeans

A well-known open source IDE, NetBeans allows system administrators to develop mobile, web, and desktop applications quickly and easily. The major features include code generation, code editing, debugging tools, a GUI builder, and more. NetBeans supports JavaScript, HTML5, Java, PHP, and C/C++. The small size of this admin tool makes installation simple and convenient; all you need to do is download the program and install it. The IDE features are all fully integrated, which means you won’t have to hunt for plug-ins. Plus, all the features work together upon launching NetBeans.

  7. Vim



Vi Improved, popularly known as Vim, is an open source text editor that works from the command line as well as in a graphical user interface (GUI). Vim offers a plugin system with support for numerous file formats and programming languages. It is quite speedy and works great on its own, as the tool relies less on Ctrl/Alt sequences and more on its modes. Vim also boasts a great keyboard macro facility for automating editing tasks. Developers might take a while to get the hang of this tool, but once they do, they will realize just how versatile Vim is.


  8. Bootstrap

Earlier known as Twitter Blueprint, the Bootstrap framework was developed by Twitter developers to improve consistency across internal tools. Thanks to Bootstrap, you can develop CSS, HTML, and JavaScript-based apps quickly and efficiently. The framework features a 12-column grid system and a responsive layout for dynamically adjusting the site to a compatible screen resolution. The results work consistently across different browsers and the output is uniform. A lot of customization options are present, and if you encounter any issues, you can seek help from the extended support community.


  1. Cordova

This free Apache-sponsored open source tool can be used for developing mobile apps with JS, CSS, and HTML. Cordova wraps the application into a native container so it can access system functions across different platforms. The best part is, moderately-skilled web developers don’t need to learn any new programming languages. Prototyping can also be done fairly quickly. Apart from the various library options, you can create vector graphics to design specifications.



Any system administrator worth their salt knows that certain tools are important for the job as well as peace of mind, since they make the admin more agile and efficient. The more familiar you become with these tools, the more you can build on your OS’s default toolset and perform a wider range of functions.

Are System Admins Obsolete as Everyone Moves to Serverless Infrastructure?

With everything going to the cloud and serverless infrastructure, is the sysadmin occupation becoming obsolete? What can sysadmins do to stay relevant in IT?

System administration roles are diversifying into system engineers, application engineers, DevOps engineers, virtualization engineers, release engineers, cloud engineers, and so on. Because of the scale of cloud computing and the additional layer of virtualization, infrastructure engineering is managed as code using automation tools such as Chef and Puppet. The rise of distributed computing and analytics has brought tremendous elasticity demands and stress to back-end infrastructure through frameworks such as Hadoop and Splunk. Applications are scaling horizontally and vertically across data centers. The emergence of the cloud has shifted the traditional system admin role toward cloud engineering, but infrastructure design and basic system services such as mail, DNS, and DHCP remain intact.


  • Learn Linux

If you want to build your career as a Linux system administrator, you need to learn the basics of Linux along with hands-on practice. I would recommend the full Red Hat Certified System Administrator (RHCSA) course; the videos are available on YouTube as well. RHCSA is an entry-level certification that focuses on actual competencies in system administration, including installing and configuring a Red Hat Enterprise Linux system and attaching it to a live network running network services.

  • Get comfortable with scripting language & automation tools

Use Bash for everyday scripting: putting things in cron, parsing logs. Don’t stop at Bash by itself; learn a little sed and awk, and focus a lot on regular expressions. Regular expressions can be used in most languages.
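As a quick sketch of why regular expressions are worth the practice (the log line and pattern here are invented for the example), this is how you might pull failed-login IPs out of a syslog-style entry in Python:

```python
import re

# A syslog-style entry; format and values are invented for this example.
line = ("Jan 12 06:25:43 server1 sshd[2342]: Failed password for "
        "invalid user admin from 203.0.113.7 port 55310 ssh2")

# Capture the offending IP address from a failed-login entry.
pattern = re.compile(r"Failed password for .* from (\d{1,3}(?:\.\d{1,3}){3})")

match = pattern.search(line)
if match:
    print(match.group(1))  # 203.0.113.7
```

The same capture works almost unchanged in grep -E, sed, or awk, which is why time spent on regular expressions pays off across tools.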

After you have spent a few weeks or months with Bash, learn Python. After a few weeks with Python you will easily see where it makes sense to use Bash versus Python.

Perl is a good general-purpose language if you deal with a lot of files or need platform-independent sysadmin automation, including on Solaris and AIX. It’s a bit hard to learn but easy to use.

Some of the important automation tools for system admins are:

  1. WPKG – The automated software deployment, upgrade, and removal program that allows you to build dependency trees of applications. The tool runs in the background and it doesn’t need any user interaction. The WPKG tool can be used to automate Windows 8 deployment tasks, so it’s good to have in any toolbox.
  2. AutoHotkey – An open-source scripting language for Microsoft Windows that lets you create keyboard and mouse macros. One of its most useful features is the ability to compile any script into a stand-alone, fully executable .exe file that runs on other PCs.
  3. Puppet Open Source – I think every IT professional has heard about Puppet and how it has captured the market during the last couple of years. This tool allows you to automate your IT infrastructure from acquisition to provisioning and management stages. The advantages? Scalability and scope!
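To give a flavor of what managing infrastructure as code with Puppet looks like, here is a minimal, hypothetical manifest that keeps the ntp package installed and its service running (package and service names vary by distribution):

```
# ntp.pp - a minimal, hypothetical Puppet manifest
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```

Applied with `puppet apply ntp.pp`, Puppet converges the machine to this state on every run, which is the key difference from a one-shot shell script.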
  • Stay up to date with the current generation of infrastructure standards & practices


  1. Analytical skills: From designing to evaluating the performance of the network and the systems
  2. People skills: A network and computer systems administrator interacts with people from all levels of the organization.
  3. Technical know-how: Administrators have to work with different kinds of computers and network equipment, so they should be familiar with how to run these.
  4. Quick thinking: An administrator must be very responsive and must be able to quickly come up with solutions to every problem that pops up.
  5. Ability to multi-task: Administrators often deal with different kinds of problems on top of what they usually do.



It’ll be systems administration under a different title like “Cloud Engineer,” done differently: probably using automation tools and infrastructure-as-code management and deployment.

Coding, automation and scripting are all very important skills to have now and for the future.

Ultimately someone will need to administer the systems and deal with the operations of the tech stack. So yes, the role has a future. The type of company varies tremendously; any company could use a sysadmin. It may be an unexciting job of maintaining a local file share and email server, or something challenging like keeping a thousand servers running.


Top 5 open source version control tools for system admins

As a system admin, the chances are you collaborate with multiple people across the company, therefore you will probably know the stress of constantly transferring files and version controlling the changes. Version control tools are a great way to enable collaboration, maintain versions, and track changes across the team.

Perhaps the greatest benefit of using version control tools is that you have the capacity to deal with an unlimited number of people, working on the same code base, without having to make sure that files are delivered back and forth. Below are some of the most popular and most preferred open-source version control systems and tools available for making your setup easier.

1. CVS

CVS may very well be where version control systems started. It was initially released in 1986, and Google still hosts the original Usenet post that announced CVS. CVS is basically the standard here, and is used just about everywhere – however, its codebase is not as feature-rich as newer solutions such as SVN.
One good thing about CVS is that it is not too difficult to learn. It comes with a simple system that keeps revisions and files up to date. Given the other options, CVS may be regarded as older technology; although it has been around for a long time, it is still incredibly useful for system admins who want to back up and share files.

2. SVN

SVN, or Subversion as it is sometimes called, is generally the version control system with the widest adoption. Many open-source projects use Subversion, as do large products such as Ruby, Python, Apache, and more. Google Code even uses SVN exclusively to distribute code.
Because it is so popular, many different clients for Subversion are available. If you use Windows, TortoiseSVN may be a great browser for editing, viewing, and modifying Subversion code bases. If you’re using a Mac, however, then Versions could be your ideal client.

3. GIT

Git is considered the newer, fast-rising star among version control systems. First developed by the creator of the Linux kernel, Linus Torvalds, Git has taken the web development and system administration communities by storm, offering a largely different form of control. There is no single centralized code base that code must be pulled from; every clone is a full repository, and different repositories can host different branches of the code. Other version control systems, such as CVS and SVN, use centralized control, so that only one master copy of the software is used.
As a fast and efficient system, Git powers the repositories of many system administrators and open-source projects. However, it is worth noting that Git is not as easy to learn as SVN or CVS, which means beginners may want to steer clear if they’re not willing to invest time in learning the tool.

4. Mercurial

This is yet another version control system, similar to Git. It was designed initially for large development projects, often outside the scope of most system admins, independent web developers, and designers. However, this doesn’t mean that smaller teams and individuals can’t use it. Mercurial is a very fast and efficient application; its creators designed the software with performance as the core feature.
Aside from being very scalable and incredibly fast, Mercurial is a far simpler system to use than the likes of Git, which is one of the reasons why certain system admins and developers use it. There aren’t as many things to learn, and the functions are less complicated and more comparable to other CVS-style systems. Mercurial also comes with a web interface and extensive documentation that can help you understand it better.

5. Bazaar

Similar to Git and Mercurial, Bazaar is a distributed version control system that also provides a great, friendly user experience. Bazaar is unique in that it can be deployed either with a central code base or as a distributed code base. It is a very versatile version control system that supports different forms of workflow, from centralized to decentralized, with a number of variations in between. One of the greatest features of Bazaar is the very detailed level of control you get over its setup. Bazaar can be adapted to fit almost any scenario, which is incredibly useful for most projects and admins, and it can also be easily embedded into existing projects. At the same time, Bazaar boasts a large community that helps maintain third-party tools and plugins.

Author: Rahul Sharma

Create your own Virtual Private Network for SSH with Putty


I have multiple Linux machines at home. Previously, when I needed SSH access to these machines, I used to set up port forwarding on my router to each of them. It was a tedious process of enabling port forwarding and then disabling it after use, and it was also difficult to remember which port was forwarded to a particular machine. But now I have found a better way to get SSH access to all my machines at home without setting up per-machine port forwarding or remembering any port numbers. Most importantly, I can address my home machines with their local subnet IP addresses, no matter where on the internet I connect from.


Prerequisites:

  1. A remote machine with PuTTY installed.
  2. Home router’s internet accessible IP address or dynamic DNS (DDNS) address.
  3. One/more Linux/Windows machine(s) to which direct SSH access is required.
  4. On the router, port forwarding is enabled for SSH service to at least one of these machines.


The basic idea is that we make one initial SSH connection to a home machine. Then, using this connection as a tunnel, we can connect to any machine at home by addressing it with its local sub-network address (such as 192.168.x.x). The high-level steps are:

  1. Open a PuTTY session and configure it to act as a tunnel.
  2. From this session, connect to your default SSH server at home.
  3. Open another PuTTY session and configure it to use the previous PuTTY session as a proxy.
  4. SSH connect to any machine at home using the local subnet IP address. Since we are using a proxy it will resolve the local subnet’s IP address properly.
  5. You can make any number of connections to all your home machines by just repeating steps (3) and (4).
    Note: If the remote network’s subnet is the same as your home network’s subnet, you might run into IP conflicts.
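For reference, the whole PuTTY setup below maps onto two stanzas of OpenSSH client configuration. Here is a sketch of an `~/.ssh/config` using the article’s example host name and a 192.168.x.x home subnet (the `nc` proxy helper is assumed to be the BSD/OpenBSD variant with SOCKS support):

```
# Dynamic SOCKS tunnel to the home network (PuTTY's "Dynamic" tunnel on port 3000)
Host home
    HostName demo123.dyndns.org
    Port 22
    DynamicForward 3000

# Route connections to home-subnet addresses through that SOCKS proxy
Host 192.168.*
    ProxyCommand nc -X 4 -x localhost:3000 %h %p
```

With this in place, `ssh home` establishes the tunnel; after that, `ssh user@192.168.1.20` reaches a home machine through it, just like steps (3) and (4).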

SSH VPN with Putty


1) On the remote system, open PuTTY and enter your home router’s IP address or dynamic DNS (DDNS) name in the host name field. Select “SSH” as the connection type. Port 22 will be selected by default and can be left alone unless you run the SSH service on a different port. Note: Though your PuTTY screen might look a little different from the one seen here due to version differences, the basic steps are still the same.

In our example,
Host Name = demo123.dyndns.org
Port= 22

Remote home system network details

2) In PuTTY, on the left-hand navigation panel, expand the SSH option and select “Tunnels”.

In the tunnels screen, set these values
Source Port: 3000 (this is the port our proxy service listens on; it can be changed to any port, but preferably a number larger than 1024)
Destination Port: (Leave Blank)
Finally, select “Dynamic” from the radio button options.

Tunnelling information for the proxy

3) Important: Click “Add” to add the tunnel settings to the connection.

Tunnel settings added

4) On the left-hand navigation panel, scroll to the top and click Session. You will see the settings entered in step (1). Now we can save the whole connection: add a name for it in the saved sessions textbox and click save.

Saving the connection settings

5) Click open to open the connection to the home machine, and enter the login and password for that machine. This user need not be root, but it must be a user with network access on the home machine. That brings us to the end of the PuTTY configuration: you now have a proxy tunnel from the remote machine to one of your home machines, and are ready to connect to any machine at home.

6) Open another PuTTY session. Select the “Proxy” option from the navigation panel. On the right-side proxy options, enter only the following information; don’t change any other settings.
Proxy type: select “SOCKS 4”
Proxy hostname: enter “localhost”
Port: 3000

Proxy Settings

7) Click on the “Session” option from the navigation panel. Enter a name under “Saved Sessions” text field. Don’t enter any information in the “Host Name” field. Now click “Save”. Now we have a template connection session using our proxy.

Proxy template

8) Now enter the local subnet IP address of a machine at home and click open. The connection gets routed through the proxy tunnel, and you will be connected to the home machine directly. Similarly, you can connect to another home machine by opening PuTTY, loading the template we created, and filling in that machine’s local subnet IP address.

Connect to home machine with local IP address

Microsoft Word Productivity Hacks Every IT Manager Needs to Know

A Forrester report suggested that more than 90% of businesses offer Office to their employees. This stat alone captures the stronghold MS Office has on enterprise document management and productivity software. Among all Office sub-products, MS Word and MS Excel are, without a doubt, the applications most office employees use at least once a day. These applications have become the mainstay of how end users interact with text and tabular data. MS Word, specifically, is a pillar of office productivity.


Here’s Something Interesting about MS Word

So, almost everyone who’s anybody thinks they know MS Word. Maybe you do, but maybe you don’t. That’s because Microsoft keeps adding features to Word, and not many users realize how much value these lesser-known features can add. Microsoft recently acquired a startup called Intentional Software to ramp up its abilities around automation and simplifying programming for collaborative Office 365 products. Emails, reports, proposals, and letters – you name it, and there’s MS Word involved. It’s surprising how even the busiest and smartest IT managers don’t make the effort to understand lesser-known Word features and get the most out of the software.

With this guide, there’s no looking back; here are some super cool productivity hacks for MS Word.


Extracting All Images from an MS Word Document

Stuck with a product manual with 100+ screenshots, and tasked with creating a new guide, re-using the old pictures? How do you copy and paste so many images separately without losing a lot of valuable time? Here’s a trick.

  • Use the Save As option to save the Word document.
  • Select Web Page as the target format.
  • Once you save it, Word creates a .html file, along with a folder that contains all the embedded images.
  • Now, all you need is to go to this folder, and you have your images waiting for you.
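The Save As trick is the GUI route; since a .docx file is itself a ZIP archive with every embedded image stored under `word/media/`, the same extraction can be scripted. A sketch in Python (the file name is illustrative, and this applies to .docx only, not the legacy .doc format):

```python
import zipfile
from pathlib import Path

def extract_images(docx_path, out_dir="images"):
    """Copy every file stored under word/media/ out of a .docx archive."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with zipfile.ZipFile(docx_path) as archive:
        for name in archive.namelist():
            if name.startswith("word/media/"):
                # Drop the internal folder path, keep only the file name.
                (out / Path(name).name).write_bytes(archive.read(name))

# e.g. extract_images("manual.docx") writes the screenshots into ./images/
```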


Copying Multiple Sections from a Long Document

For an IT manager who needs to go through long reports and is tasked with creating executive summaries, MS Word’s Clipboard feature is a godsend.

Using this feature, you can quickly review the last 24 selections of text and images you copied from the document! All you need to do is to go to the Home tab, look for the Clipboard button, and click on it.

This saves you vital time as you can visit the Clipboard anytime to take a quick look at whatever you selected and copied. This, for obvious reasons, proves invaluable particularly when you are trying to mark important content sections, to review them or collate them later.


Real-Time Co-Authoring

For IT managers hard pressed for deadlines, and those working closely with other managers and executives to prepare proposals and review documents, co-authoring is a tremendous productivity hack. This is the equivalent of co-authors sitting next to each other and working on the same content.

Co-authoring enables users to see everyone’s changes as they happen in the document, facilitating super quick feedback. MS Office support guides explain co-authoring as a 3-step process.

  • Save your document to SharePoint Online, or OneDrive.
  • Send out invites to people to edit the document along with you.
  • When this shared document is opened, each invitee will see the work that was done by others (supported by MS Word 2016, Word Android, and Word Online).


View Documents Side By Side

Pressing ALT + Tab to switch between two simultaneously opened Word documents can be disorienting. It certainly isn’t the best way to compare documents. If you don’t want to use the ‘Compare’ feature in the ‘Review’ tab of MS Word, and only want to go through two documents side by side, there’s an option.

  • Open the two documents you wish to view side by side.
  • Go to the View tab.
  • In the ‘Windows’ section, click on the ‘View Side by Side’ option.
  • If you also want the two documents to scroll simultaneously, you can click on the ‘Synchronous Scrolling’ button.


Pin Files to ‘Recently Used’

Ask any IT manager who needs to prepare daily, weekly, and monthly reports what a mess it can be to maintain basic templates that you can edit and repurpose into newer reports. No more ‘search’ hassles: you can keep your trusted, ready-reckoner files pinned to the ‘recently used’ tab.

Here’s how you can add a document here:

  • Go to the File tab
  • Click on Open to see a list of recently used files
  • Click on the ‘Pin this item to the list’ button


Make Your Documents Easier to Read

If your supervisor or boss keeps on requesting re-work on documents because of ‘readability’ issues, you know how much time can be lost in attending to the ambiguous feedback. No need to put up with any of it any longer, because MS Word brings you two globally trusted readability tests, built right into the tool. These tests are:

  • Flesch Reading Ease test
  • Flesch-Kincaid Grade Level test

Here’s how you can use this feature:

  • Go to File, and click on Options.
  • Select Proofing.
  • Under ‘When correcting spelling and grammar in Word’, check the ‘Check grammar with spelling’ box.
  • Check the ‘Show readability statistics’ box.

(In Outlook, the equivalent settings live under Mail > Compose Messages > Spelling and AutoCorrect.)
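For reference, both scores are plain formulas over word, sentence, and syllable counts. A sketch (Word’s own syllable counter is more sophisticated than any quick approximation, so the counts are taken as inputs here):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Higher is easier to read; 60-70 is roughly plain English."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Approximate US school grade level needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# e.g. a 100-word sample with 5 sentences and 150 syllables
# scores about 59.6 for reading ease and grade level 9.9
```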


More Productivity Hacks

Apart from all the nifty tricks we covered above, there’s a lot more you can do with MS Word. For instance:

  • Transferring ODT files with Microsoft Word Online and Google Docs (useful while you’re working with startups and vendors that use open source document management software, saving files in ODT format)
  • Keyboard shortcuts, such as Ctrl + Alt + V, to access formatting options while pasting content from one section to another.
  • Press ALT and watch shortcut indicators pop up at the top of the menu bar, telling you which key to press to access the associated action quickly.


Concluding Remarks

Chances are it will still take time before you get a personalized AI-powered assistant to take your Office work off your hands. Till then, trust productivity hacks like the ones in this guide to make work quicker, better, and more fun.



Author: Rahul Sharma

Top End User Computing Trends that IT Leaders Need to Sync Up with in 2018


The world of end-user computing continues to evolve at breakneck speed. Enterprise employees want the flexibility to work anytime, anywhere, using any device and any web browser, with the kind of UX they like. Enterprises have every reason to work towards delivering this experience, because it boosts productivity, enables workplace mobility, and enhances employees’ work experience. In a world where innovations from the personal space are seeping into business workplaces, it’s imperative for IT leaders to stay on top of trends in end-user computing. We’ve covered some of the most important ones in this guide.

Contextualized Security

End-user computing security hardly needs underscoring as one of the most important trends IT leaders must track. In 2018, however, enterprises will likely start moving to implement contextual end-user computing security mechanisms.

Contextual security takes several user-specific variables into account to determine the right security-related action. These variables include:

  • The roles and privileges assigned to the user
  • The roles and privileges assigned to the parent group the user is a part of
  • The most commonly used transactions, applications, and processes for a user
  • The average session length
  • The general set of locations from which secured application access is made by the user
  • The browser, IP, and device the user uses to access enterprise data

These technologies are hence able to create unique user profiles, match user behavior to the profile, and, based on any deviations, initiate actions such as:

  • Blocking access
  • Reporting potentially malicious activity to the IT security team
  • Altering the user profile to allow the deviation, following a proper mechanism of authorizations and approvals
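As a minimal illustration of the idea (every profile field, value, and threshold here is invented for the example), a contextual check reduces to comparing a session’s attributes against the user’s learned profile:

```python
# Hypothetical learned profile for one user.
profile = {
    "locations": {"London", "Manchester"},
    "devices": {"laptop-4411"},
    "avg_session_minutes": 45,
}

def assess_session(profile, session):
    """Return the list of deviations between a session and the profile."""
    deviations = []
    if session["location"] not in profile["locations"]:
        deviations.append("unusual location")
    if session["device"] not in profile["devices"]:
        deviations.append("unknown device")
    if session["minutes"] > 3 * profile["avg_session_minutes"]:
        deviations.append("abnormal session length")
    return deviations

print(assess_session(profile, {"location": "Sydney", "device": "laptop-4411", "minutes": 50}))
# ['unusual location']
```

A real product would learn these profiles from historical data rather than hard-coding them, and would weight deviations instead of treating each as a simple yes/no.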

The Role of AI, ML, and DA in Enhancing End-User Security

Leaders in the end-user computing market are focusing on enhancing existing security technologies by leveraging data analytics (DA), artificial intelligence (AI), and machine learning (ML). Organizations are already spending heavily on AI technologies, so many of them already have a strong base on which to build their future end-user computing security. IT now has more sophisticated, data-backed, and pattern-dependent methods of detecting intrusion. Security technologies in 2018 will start offering built-in analytics and machine learning capabilities to transform the end-user computing world for the better.

Managing Complexity of Device Diversity

Gone are the days when the average enterprise had merely desktops, laptops, and VoIP phones on employee desks. Today, the range of devices used by employees in a tech-powered enterprise is expansive, to say the least. There are satellite phones, handheld devices to record information, barcode scanners, tablets, smartphones, smart speakers, and more. And we’re on the cusp of the transition to Industry 4.0, powered by IoT. To make the ecosystem more complex, there are many operating systems, web browsers, and communication protocols in play.

Managing this complexity has been a challenge for enterprises for some time now. In 2018, though, the giants in the end-user computing market will release products that help them out. Today, enterprises want their employees to have access to their personalized desktops on whichever computer they use for their work, anywhere. These are virtual desktops, and they are already widely used by enterprises across markets. In 2018, the leading vendors will look to make their VDI services available across device types and address operating-system variations.

Diminishing Lines Between Commercial and Business Apps

Dimension Data’s 2016 End User Computing Insights Report highlighted how several enterprises rated their business-app maturity lowest among six areas. This stat truly captured the need for businesses to start focusing on delivering a superior app experience to end users. Because these end users are accustomed to terrific apps for their routine life management (productivity, transportation, note taking, bookings, communication, data management, etc.), their expectations of equivalent business apps are just as high. This makes it important for IT leaders to keep a close eye on the progress of business apps at their workplaces. An important trend in this space is the use of personalized app stores for user groups, as made possible by the Microsoft Windows 10 app store.

Increased Adoption of Desktop as a Service

DaaS must be an important component of any enterprise’s virtualization strategy. Traditionally, services such as Amazon WorkSpaces have been seen as viable only for disaster recovery and business continuity planning. However, it’s going to be exciting to watch developments in this space throughout 2018.

  • Vendors are likely to release updates that will help address challenges around desktop image management and application delivery.
  • DaaS applications that can act as extensions to your data centre will surface, driving adoption.
  • Integrated services such as Active Directory, application manager, and Microsoft System Center will also help further the adoption of DaaS.
  • New services, functionalities, and configurations will keep being added to DaaS solutions.

A useful option for enterprises will be to seek the services of consultants with experience in VDI strategy.

Video Content Management

Video has become a crucial content format for enterprises. The importance of video content is clear from the kind of estimates tech giants are making; for instance, Cisco estimates that 80% of web traffic will be video by the end of 2020.

Enterprise end users require video content for DIY guidance, or to share it with their end customers. Field employees also need tech support via video to quickly manage technical issues. Video content management systems will hence become important to the enterprise from an end-user-experience point of view.

Concluding Remarks

Throughout 2018, expect to witness pressing changes in the end-user computing market. Primarily, vendors will push to increase adoption of advanced, feature-rich end-user computing solutions.



Author: Rahul Sharma