Hot off the press! Softrams has earned 2022 Top Workplaces Culture Excellence recognition with two awards, in Leadership and Innovation. The recognition is issued by Energage, a renowned research company with sixteen years of experience that has surveyed over 27 million employees across 70,000 organizations.

Top Workplaces Culture Excellence awards celebrate organizations that excel in specific areas of workplace culture and are based solely on employee feedback, collected through a research-backed, 24-item employee engagement survey. This is Softrams’ second consecutive win; the company also received 2021 awards for Remote Workplace in Culture and for Technology in Industry.

“Culture is what we consistently do,” said Murali Mallina, Softrams CTO. “It reflects the contribution of each person at Softrams who has created and nourished our culture. Congratulations to everyone, and thank you for all the amazing work you do and the environment you create for everybody on the team to grow, create products, and deliver services that impact millions of people every day.”

Culture Excellence Award Descriptions

Leadership
The Leadership Top Workplaces award celebrates organizations whose leaders inspire confidence in their employees and in the direction of the company. These leaders understand the customer needs that front-line employees hear about every day.

Innovation
The Innovation Top Workplaces award celebrates organizations that have embedded innovation into their culture and created an environment where new ideas can come from all employees.

“Top Workplaces is a beacon of light for organizations as well as a sign of resiliency and impressive performance,” said Eric Rubino, Energage CEO. “When you give your employees a voice, you come together to navigate challenges and shape your path forward. Top Workplaces draw on real-time insights into what works best for their organization, so they can make informed decisions that have a positive impact on their people and their business.”

Congratulations to Softrams on this employee-led achievement. It is an accolade won by the employees and a tremendous endorsement of the culture they themselves have shaped.


Just how secure is our data?

Most companies have complex and efficient security protocols to defend against external cyberattacks, but the largest threat may come from within the company. Most security breaches reported today are due to human error. The recent Cash App data breach is an example worth considering: data belonging to an estimated 8 million users was exposed after an internal employee leaked sensitive information. Legal action will certainly be taken against the employee, but the question remains: at what cost? The data of those 8 million people is already out in the open, and it may include personal and confidential details.

The leaked data ends up on the dark web, available to the highest bidder, and it can then be used to gain even more information. In this particular case, the leaked data was about traders, and other trading apps could use it to undermine the existing business. With trading data exposed, competing companies can pull those customers into their own ecosystems; an illustration of the saying “data is the new weapon.”

Understanding the dark web allows a company to help its customers and clients and to safeguard its own resources.

Tracking Threats: Most underground attacks are planned in forums on dark web sites. These attacks may target an organization, persons of interest, or VIPs. To understand these threats, we need to understand the market: the forums, the reputations of notable hackers, the data being bought and sold, and so on. By continuously tracking and monitoring these threats, they can be mitigated before they materialize and the necessary precautions can be taken.

Passwords/Credentials: This is the most common category of all. When a data breach contains passwords, the value of the data being sold increases threefold, so this is the most targeted type of data. Once an unauthorized person gains access to these credentials, they can impersonate users and bypass security systems. By continuously monitoring for potential data breaches, companies can identify the threat early and act quickly by notifying affected users to reset their passwords.

Cyber Fraud: This is where payment details like credit/debit cards, phishing information, and counterfeit goods are sold. By monitoring for these types of data, companies can act fast and mitigate the threats. Notable products for sale include phishing kits used to target unsuspecting users. Another recent fraud is gift card fraud, where gift cards are traded as a form of payment across criminal forums, dark web markets, and dark web sites.

 

These data breaches often happen through the weakest link in a company, which is why cybersecurity awareness for all employees is critical. In essence, employees are the last line of defense for any company. While technology can stop most attacks, it cannot completely eliminate threats. This is why it is everyone’s responsibility to stay up to date on the latest cybersecurity measures put in place by your company, which may also help protect your personal data! Go over the following questions and think about the security in your own home:

How secure is your internet connection while working with personal data?

Are your smart home devices secure enough to store your WiFi credentials?

Do you own an Echo/Google Smart Home Assistant? Are you certain they are not always listening? Check if your device has a mute button – do you always press it to discuss sensitive data?


We’ve been using Argo CD to deploy to our Kubernetes clusters for a while now. The overall experience has been great, but as we get more advanced we have started to wish for additional options. One issue we’ve had is that the values files for Helm charts have to exist in the same repo as the chart.

This is fine when we are deploying a standalone helm chart we control, but it is not so easy when using existing helm charts or our own helm charts that are meant to be deployed to all clusters we spin up. So we end up supplying values as overrides on the Application manifest. There are some concerns with this approach, like wanting to separate permissions to update Applications from permissions to update the values used.

There are a few different ways people have gone about addressing this. Some use proxy charts that just consist of value files and a dependency on the regular chart. Some have created plugins for Argo CD that handle pulling in separate value files. And there is also a proposal being worked on to implement a feature in Argo CD to support external value files. While we were enabling our platform to support Argo CD managing multiple clusters we realized we could use the same ApplicationSets we were using to deploy to all clusters to also grab helm values from separate repos.

ApplicationSets have multiple different generators you can use to generate the Applications to be deployed. The Git Generator: Files is what gives us the ability to pass values from a separate repo. It will look in a git repo for files that define the clusters and we can include overrides there. It effectively does what we were already doing with the values as overrides, but allows us to store those values in a separate repo. Here is an example:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example
spec:
  generators:
  - git:
      repoURL: https://github.com/example.git
      revision: HEAD
      files:
      - path: "examples/**.yaml"
  template:
    metadata:
      name: '{{name}}-example'
    spec:
      project: default
      source:
        helm:
          valueFiles:
          - values.yaml
          values: |-
            {{values}}
        repoURL: https://github.com/bitnami/charts.git
        targetRevision: HEAD
        path: "bitnami/nginx"
      destination:
        server: '{{address}}'
        namespace: default

In the example repo you would have a YAML file (e.g., in-cluster.yaml) that defines your cluster and the overrides:

name: "in-cluster"
address: "https://kubernetes.default.svc"
values: |
  replicaCount: 2
  resources:
    limits:
      cpu: 100m
      memory: 128Mi

This will generate an Application with the values specified and targeting the cluster specified in the in-cluster.yaml file.
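
For illustration, the Application generated from the in-cluster.yaml above would look roughly like the following (sketched by hand from the template substitution, so treat it as an approximation rather than exact Argo CD output):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: in-cluster-example   # '{{name}}-example' with name: "in-cluster"
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
      values: |-
        replicaCount: 2
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
    repoURL: https://github.com/bitnami/charts.git
    targetRevision: HEAD
    path: "bitnami/nginx"
  destination:
    server: https://kubernetes.default.svc   # '{{address}}' from in-cluster.yaml
    namespace: default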

If we are just using this to pass values to the cluster Argo CD is running on, and don’t need to provide the ability to deploy to other clusters, we can simplify:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example
spec:
  generators:
  - git:
      repoURL: https://github.com/example.git
      revision: HEAD
      files:
      - path: "examples/in-cluster.yaml"
  template:
    metadata:
      name: 'example'
    spec:
      project: default
      source:
        helm:
          valueFiles:
          - values.yaml
          values: |-
            {{values}}
        repoURL: https://github.com/bitnami/charts.git
        targetRevision: HEAD
        path: "bitnami/nginx"
      destination:
        server: "https://kubernetes.default.svc"
        namespace: default

Then we don’t need to include the name or address in the in-cluster.yaml file. That way the files only contain the values we are trying to override, and we can separate roles and responsibilities.

values: |
  replicaCount: 2
  resources:
    limits:
      cpu: 100m
      memory: 128Mi


Companies are becoming more aware of the importance of cybersecurity practices and are requiring users to use multi-factor authentication to better protect their information. Login requests can be routed through applications such as Duo Two-Factor Authentication and Google Authenticator to verify that only privileged users can access the system. Softrams is a great example of an enterprise that implements multi-factor authentication applications, such as Okta and Google Authenticator, throughout its systems. Softrams has always used multi-factor authentication and continues to analyze log data for potential abuse.

Other organizations are becoming more aware of this as well. Just recently, the Cybersecurity and Infrastructure Security Agency (CISA) added single-factor authentication, which relies on only a username and password to log in, to its list of bad cybersecurity practices. It is important to note that single-factor authentication alone is not a reliable system, and everyone should adopt a multi-factor approach to better protect their data. According to the article, single-factor authentication is highly vulnerable to “brute force, phishing, social engineering, keylogging, network sniffing, malware, and credential dumping” attacks, making it inadequate for securing computer systems (Hope). Additionally, reusing passwords and creating short, weak passwords contribute to the ineffectiveness of single-factor authentication. To increase security, it is best to include another level of authentication, which has been shown to “block 100% of automated attacks, 99% of bulk phishing attacks, and 66% of targeted attacks on Google accounts” (Hope). While logging in multiple times can be irritating, implementing this one little step in your daily logins can go a long way in ensuring the security of your digital assets.


Tiago Forte coined the term Second Brain, and the main idea is that your brain is for having ideas, not storing them. Small things, such as someone’s phone number or the names of two new colleagues you meet at work, are easy enough to remember. However, it becomes tedious to juggle everything when you are learning more about Kubernetes while also reading about Node.js, all while managing the large number of articles and documents that need to be read to complete daily tasks. A second brain helps reduce the amount of information your own brain has to juggle between tasks and learning engagements.

So how does it work?

The process is simple enough and is similar to taking regular old-fashioned notes. The first step is to jot down any important pieces of information that resonate with you, such as the mitochondria being the powerhouse of the cell or ketchup simply being a form of tomato juice, and so on. This could include facts, quotes, or summaries of just about anything, and as your collection of notes increases, the knowledge in your second brain grows and expands as well. But the idea really comes to life when you implement the second step: creating a system to organize all those notes. Going through and analyzing all the notes you have collected forces you to think critically about the information you have captured and to remove anything extra or unnecessary.

There is no set formula or specific set of rules for organizing the information. The point is simply to go through all the notes and create a system for categorizing them so that in the future, when you need to refer to them for an assignment or a project, or to review the progress you have made so far, everything is laid out clearly for you to see and understand.

Others have developed similar ideas and concepts over the years. One that is close to the second brain is Zettelkasten, or the slip-box method. The sociologist Niklas Luhmann is well known for using this method and had over 90,000 cards in his slip-box. However, Zettelkasten is more detailed because it focuses on capturing specific ideas from each source, which are then written in your own words and linked to other notes with related concepts. The second brain does not go that deep and is more generalized. It helps capture all the information in one place, and as you use it, you know exactly where to look because you have organized it the way your brain processes information.

Therefore, if a new project comes along in the future, whether for work or personal use, you have a second brain full of ideas and information in one place to reference rather than having to research everything again. Another benefit of organizing your notes is that you can start connecting seemingly different ideas and concepts to develop new ones. Here you are using your brain to do what it does best: have and create new ideas.

How do we build our second brain?

Now that you are all excited and super pumped to create your second brain, there are two main ways to do this. The first is digital, using online note-taking applications. Popular and free options include Notion, Obsidian, Evernote, and OneNote. They allow you to structure your notes however you like while also being available across many devices, so your notes are ready whenever inspiration strikes. The second is the traditional pen-and-paper method, where you can physically touch and rearrange the information as more ideas are added. The two approaches can be used individually or combined to build a powerful source of information for yourself!


I remember being at an investor dinner when I worked for an up-and-coming tech startup. This was after our official presentation, and we were kicking back and enjoying some casual conversation. One of the investors asked our CTO what the hardest technical challenge he had encountered while working on software was. Without skipping a beat, our CTO responded, “time zones”.

Now, lest you think less of our CTO, he was a brilliant person who had built some very complex systems from the ground up, and at the time we had built a system that depended heavily on to-the-second accurate information. He knew what was hard and what wasn’t.

If you work on an application that deals with dates and times across time zones without a deliberate strategy behind it, you’re likely showing incorrect information.  

Time Zones 

Let’s cut to the chase.  

In a standard web application with persistence (a database) you will likely have an architecture simplified to this: 


Your web browser connects to a server somewhere and retrieves information from a database. Part of this information may include a date and time, such as 2022-03-22 12:00:00. The question is, what time does this represent?  

It’s impossible to know without knowing what the time zone of the database is assumed to be. Without explicitly storing the time zone with the datetime stamp, it’s meaningless. All database servers have an assumed time zone, though, and this is where our puzzle begins.  

Most of the time, your database server will be set to UTC, or Coordinated Universal Time. This is the world’s primary time standard, and all other time zones are described as “offsets” from it. For instance, Eastern time in the United States is either 4 or 5 hours behind UTC, depending on daylight saving time. You would represent this as UTC-4 or UTC-5.

Now, in my case, the database server is set to represent Eastern time, which gives me a reference by which I, or any consuming application, can understand the dates and times retrieved from a database.  

Node to the Rescue? 

Node is a very popular server runtime these days, and when you work with MySQL using the popular Node MySQL adapter MySqlJS, it will convert any datetime stamp it finds into a native JavaScript Date object unless you tell it not to. When it does this, it assumes a time zone for the incoming timestamp, and if it isn’t given one, it will use the time zone of the current runtime.
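
As a rough sketch of that behavior (the connection details here are hypothetical), the adapter’s dateStrings option is the switch that tells it not to convert:

const mysql = require('mysql');

// By default, DATETIME columns are inflated into native JavaScript Date objects,
// interpreted using the connection's `timezone` option (which defaults to the runtime's local zone).
const connection = mysql.createConnection({
  host: 'localhost',      // hypothetical connection details
  user: 'app',
  password: 'secret',
  database: 'example',
  dateStrings: true       // return '2022-03-22 12:00:00' as a plain string instead of a Date
});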

In Steps AWS Lambda 

AWS Lambda runs in UTC, which means that if you construct a new JavaScript Date object in Node, it assumes the time zone to be UTC. Let’s take our datetime stamp above and see what this would do:

Database datetime stamp: 2022-03-22 12:00:00

If I took that date timestamp in an AWS Lambda function running Node and created a new Date from it like so: 

new Date('2022-03-22 12:00:00');

You’ll end up with a datetime stamp in JavaScript like this: 

2022-03-22T12:00:00Z

Looks great right? 

Here’s the problem: that Z at the end of the string means this date and time is in UTC, which means that in Eastern time (my original time zone), my date and time is actually:

2022-03-22 07:00:00

5 hours off. Ain’t it wacky? This is because MySQL knows the dates it holds are in Eastern time, but we aren’t telling Node that, so when we query the database and Node helpfully converts those times to native Dates within JavaScript, we distort our data.

There are lots of ways to handle this, but the key point is that it must be handled.  

The Solution 

In our situation, the most common scenario is this: 

In order to make sure the Client receives the right data, the server and database need to be talking the same time zone, or at least understand where the other stands. 

With the MySqlJS library above, this is pretty simple. When I establish a connection, I can give it the time zone to work in: 

Database Connection Example
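
Here is a minimal sketch of what that connection setup might look like with the MySqlJS adapter (the host, user, and database names are hypothetical; the timezone property is the relevant part):

const mysql = require('mysql');

const connection = mysql.createConnection({
  host: 'localhost',      // hypothetical connection details
  user: 'app',
  password: 'secret',
  database: 'example',
  timezone: '-05:00'      // treat DATETIME values from this database as Eastern (UTC-5)
});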

That last property is the most important, because it will tell Node to use an offset of `-5` when establishing the connection.  

When you do this and you rerun our example from above you get the magical, correct, value: 

Database datetime stamp: 2022-03-22 12:00:00

new Date('2022-03-22 12:00:00');

Annnd our datetime stamp in JavaScript: 

2022-03-22T17:00:00Z

This may be confusing because this is a UTC timestamp, but that’s what we want. 

2022-03-22T17:00:00Z == 2022-03-22 12:00:00 EST

Summary 

TLDR; 

If you do date comparisons or casting on a server, make sure that you know the time zone of the server as well as the origin of the data.  

In addition: 

  1. Be specific about the time zones you’re using when passing around dates. 
  2. Understand the time zone of the runtimes you’re using. 
  3. Pass your dates and times around in consistent formats (see the sketch below). 
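
As a small illustration of that last point, one convention (an assumption on my part, not the only option) is to normalize dates to UTC ISO-8601 strings at system boundaries:

// Serialize dates as UTC ISO-8601 strings when handing them between systems.
const fromDb = new Date('2022-03-22 12:00:00'); // interpreted in the runtime's time zone
const wireFormat = fromDb.toISOString();        // '2022-03-22T17:00:00.000Z' on a runtime fixed at UTC-5

// Parse it back on the other side; the trailing 'Z' makes the zone explicit.
const received = new Date(wireFormat);
console.log(received.getTime() === fromDb.getTime()); // true - same instant, unambiguous format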
     

Happy coding. 

 


If you have worked in the software industry, chances are you’ve encountered a program that needs to migrate an application or feature. The term “migration” often brings a sense of fear and foreboding, but it doesn’t have to. At Softrams, we have often taken ownership of applications that use legacy technologies and require some modernization to be sustainable for future use. We’re going to share some insights we’ve gained in doing this, as well as outline some best practices that will help your organization if it finds itself needing to perform a migration.

What’s a migration? 

A standard software lifecycle is to define a product, build it, launch it, and then maintain it. Occasionally during the lifecycle of a software application, it becomes clear that the way something was originally built isn’t adequate for the entire life of the application. There may be new requirements that the first build of the application isn’t well-positioned to support, or perhaps one of the platforms the application was built on will no longer be supported. In these cases, you may have to take your existing application and “migrate” the functionality in some way. This could mean:

  • Your application is written in a particular language, and you need to move it to a new language  or runtime (perhaps ASP.NET to Node). 
  • The application needs to be moved from one platform to another (hosting using Heroku to AWS, for example). 
  • You may want to split your application from a monolith to a microservice approach, or the opposite.  

Each of these scenarios will require a very different approach and bring its own challenges. For instance, changing languages depends greatly on:

  • Developer knowledge of both languages. If developers are weak in one or both languages, you may consider training to ensure that the migrated codebase is written with best practices. 
  • Developer knowledge of the business logic of the application. If developers do not understand the existing codebase, it is likely that subtle bugs will be introduced during the migration due to a misunderstanding. 

And each of the examples above will present their own gotchas.  

How do you approach a migration? 

When the challenge of migrating your code base is presented, it’s essential for the team to approach the migration intentionally. To have a high chance of a successful migration, ensure you: 

  • Understand why this migration is necessary 
  • Identify the specific goal of this migration 
  • Document all requirements necessary for the migration to occur 
  • Identify and communicate the risk associated with performing this migration

Let’s break each of those down a little further and understand why they’re crucial to a successful migration. 

Understanding why 

There are very few directives for a development team less motivating than being told to do a bunch of critical, risky work without adequate reason. There can be many real and important benefits to performing a migration, and any initiative needs to start with why. If the team isn’t motivated to achieve real benefits with a migration, you will find it takes longer and carries more risk. 

Identify the goal of this migration 

Though understanding “why” a migration is occurring is helpful, it should be taken one step further to also ensure that each migration has criteria for success. This could be as simple as specifying that “my application needs to work on AWS instead of Heroku” or it can be a specific metric such as “bringing the application launch time from 20 minutes to < 1 minute”. Either way, identify criteria for success and ensure the team understands how they succeed.  

Document all requirements  

The complexity of this documentation will depend on the type of migration. If you’re rewriting a codebase from one language to another, you would want to highlight any language specific considerations that may impact delivering comparable functionality. If you’re migrating platforms, you’ll want to ensure you understand exactly how you want your infrastructure to behave with robust analysis. This type of documentation also gives you a nice checklist to work through as the migration occurs and can provide visibility on how the migration is progressing. 

Identify and communicate risk 

Any changes to a software application carry risk. Whether you’re making small changes to a layout, or changing core aspects of your system, be assured that change brings the possibility of error and the larger the change, the greater the risk of error. Before you decide to migrate any part of your application identify what the major risks are and agree on mitigation strategies. For instance, if you are migrating the hosting of your website, the last step may be to route all traffic to your newly hosted site and a solid failsafe is to keep your existing site up and running in parallel for a while so that you can revert to the existing instance in cases of failure.  

Specific Scenarios/Gotchas 

As mentioned above, different migrations can have their own challenges. Below, we outline some specific things to keep in mind for each. 

Lift + shift 

A “lift and shift” migration is one where you take the exact functionality and code conventions of an existing codebase written in a specific programming language and rewrite it in another programming language by changing as few things as possible. Sometimes this means that you can copy an entire codebase with only some simple syntax changes. 

When to consider this: 

  • Your timeline is tight and you have a small amount to migrate 
  • You think there will be time in the future to revisit and refactor 
  • Your existing application architecture and structure are solid, with sound conventions 

What to examine before pursuing: 

  • Does the new language support all existing conventions of the current language? 
  • Are system requirements documented well enough to inform developers of how to make decisions when a clear “shift” is not possible? 
  • Are there any poor coding practices present that may be in need of a refactor? If so, this may not be the correct approach. 

Refactor + shift 

In contrast to a “lift and shift,” where you avoid changing code as you migrate it to a new language, a “refactor and shift” approach requires a more deliberate plan and more coordination. This approach takes an existing codebase and rewrites it in another language or framework, maintaining existing functionality while improving the structure and consistency of the codebase.

When to consider this: 

  • You have an adequate timeline to complete the migration 
  • You have solid engineering leadership 
  • Your system could use a little TLC on the maintainability scale 

Some risks: 

  • Make sure you define how far to take the “refactor”. Some developers hear the word “refactor” and don’t know when enough is enough.  
  • This approach requires more robust testing and quality control, since a larger portion of the application will be changed and unintended side effects may creep in. 

Full re-write 

Sometimes you have an application that was either poorly written or written to requirements that are no longer relevant. In this case, you take the desired functionality and purpose of the system and rewrite it from the ground up. This option also provides an opportunity to change frameworks or languages, if desired.

When to consider this: 

  • You find yourself consistently unable to meet new requirements due to prior (limiting) architecture decisions.  
  • The purpose or requirements of the system have changed dramatically since its original launch. 
  • Your company can accommodate a timeline that prioritizes refactoring over new features. 
  • You have well defined product requirements. Scope creep can affect re-writing software just as much as a new product, and you don’t want your migration to be affected by ill-defined requirements. 

Risks 

  • Having a working application brings stability and confidence that vanishes when you decide to launch fresh. The moment you decide to do a rewrite, you need to be prepared to treat the initiative as a new product launch. 
  • The only sure way to know how long it will take to develop software is to develop it and then measure the time it took. The larger the initiative, the larger the potential variance in the timeline. 

Summary

Migrating a codebase, especially when it supports a live application, can be an intimidating task. There are times when it is the best course of action for the lifecycle of your application. In these cases make sure you set your program and team up for success by making good decisions and considering all your options.


 

Definition:

A cyber incident is an event that could jeopardize the confidentiality, integrity, or availability of digital information or information systems.

An incident can be defined as an unexpected disruption to a service. An incident can disrupt your business which will directly or indirectly impact your customers.

Examples of incidents include the following:

Application locks, network service failures, application crashes, Wi-Fi connectivity issues, file-sharing difficulties, unauthorized changes to systems, data, or software, denial of service (DoS), compromised user accounts, etc.

What is the most important thing to do if you suspect a security incident?

If you suspect an incident on a system that contains sensitive data, do not attempt to investigate or remediate it yourself. Instruct all users on the system to stop work, remove the system from the office network by unplugging the network cable or disconnecting it from the wireless network, and follow the incident response reporting policy in the existing IR plan.

The importance of reporting an incident:

Incident reporting acts as a heads-up to management: it raises awareness of what can go wrong if corrective and preventative actions are not taken immediately. It also gives management more detailed information to draw on whenever an incident occurs or reoccurs. Reported incidents serve as reminders of possible hazards, and when they are reported promptly it is easier to monitor potential problems and root causes, since they can always recur. Reporting helps identify the who, what, when, and where of an attack. Reporting an incident as soon as possible helps contain it and limit its adverse effects, reducing the cost to an organization both financially and reputationally.

Incident Response is a system of people, processes, and technology leveraged to prepare for, detect, contain, and recover from a suspected cybersecurity incident or compromise.

People:

  • Incident Responders
  • Security Operations Center (SOC) Analysts
  • Forensic Analysts
  • Threat Intelligence

Process:

  • Incident Response Plan
  • Runbook/Playbook

Technology:

  • Response (SIEM, custom tooling)
  • Analysis (analytical/forensic tooling)
  • Detection (AV/EDR, custom log services, SIEM)

Not all security incidents can be prevented, so organizations must be prepared.

Incident Response Lifecycle:

The Incident Response Lifecycle is broken into four phases, according to NIST, as follows:

Phase 1 – Preparation:

This covers all actions an organization takes to be ready for incident response, which involves putting together the right resources and tools.

Phase 2 – Detection and Analysis: 

Accurately detecting and assessing incidents is difficult for some organizations, according to the NIST publication.

Phase 3 – Containment, Eradication, and Recovery:

This phase covers the measures necessary to contain the incident, limiting its spread and keeping its impact as low as possible, and directing the available resources to recover from the incident as quickly and effectively as possible to mitigate service disruptions.

Phase 4 – Post-Event Activity:

The most important part of the lifecycle is learning and improving after an incident. Take adequate time to analyze the incident response effort, review your incident response procedures to highlight improvements and inform your planning for next time, and advise on communications both internally and externally, including to authorities, the media, and suppliers.

Best Practice

This blog is based on a combination of the best-practice cyber incident response framework developed by CREST, NIST SP 800-66rev2, and the international standard on incident management, ISO/IEC 27035.


2nd annual list recognizes 147 private companies that put purpose before profit

Softrams has been named to the Inc. 2021 Best in Business list in the IT Development category. Inc.’s Best in Business Awards honor companies that have gone above and beyond to make a positive impact.

The list, which can be found in the Winter issue of Inc. magazine, recognizes small- and medium-size privately held American businesses that have had an outstanding influence on their communities, their industries, the environment, or society as a whole.

Scott Omelianuk, editor-in-chief of Inc., says, “What began for us during the pandemic as an effort to showcase companies that were helping the community has grown into a recognition of social, environmental, and economic impact. The companies on this year’s list are changemakers with heart – and they’re pouring the best of their business into the people and communities around them.”

Rather than relying on quantitative criteria tied to sales or funding, Inc.’s editors reviewed the companies’ achievements over the past year and noted how they made a positive difference in the world. They then selected honorees in more than 49 different industries – from finance to software to engineering to fashion, and more – and in age-based and revenue-based categories. The applicant pool was extremely competitive, with around 2,700 entries and an acceptance rate in the low single digits – a huge success for these honors in the list’s second year. Honorees for gold, silver, bronze, and general excellence across industries and categories are featured online at inc.com/best-in-business.

ABOUT INC. MEDIA

The world’s most trusted business-media brand, Inc. offers entrepreneurs the knowledge, tools, connections, and community to build great companies. Its award-winning multiplatform content reaches more than 50 million people each month across various channels, including websites, newsletters, social media, podcasts, and print. Its prestigious Inc. 5000 list, produced every year since 1982, analyzes company data to recognize the fastest-growing privately held businesses in the United States. The global recognition that comes with inclusion in the Inc. 5000 allows these founders a chance to engage with their peers in an exclusive community with the credibility to help drive sales and recruit talent. The associated Inc. 5000 Conference is part of a highly acclaimed portfolio of bespoke events produced by Inc. For more information, visit http://www.inc.com.


In 2020, over 1,100 data breaches were reported in the United States, affecting nearly 300 million individuals (Identity Theft Resource Center). This is especially concerning for private businesses, as over half of these attacks affected them. With alarming statistics like these on the rise and network infrastructure continuing to expand, it is essential that enterprises keep investing in better network security. Designing and maintaining an organization’s network infrastructure is a vital part of security and is needed now more than ever.

What is included in a typical network? Basic network infrastructure consists of routers, switches, servers, workstations, firewalls, hubs, operating systems, and more. Together, these components create an interconnected communications channel through which information is sent between software applications and physical resources. Enterprises use a Local Area Network (LAN) to limit their network to privileged users only.

What are the aspects of a secure network? There are multiple ways to safeguard the information and private communications within a network. Firewalls help filter and monitor traffic so that only approved incoming and outgoing communications get through. A recommended approach is to use two firewalls separated by a DMZ. DMZ stands for demilitarized zone, which in cybersecurity is an extra layer of security between the public and private networks. It allows an organization to reach untrusted networks while its LAN stays secure. External-facing resources such as web and email servers are usually located in this zone. A VPN is also crucial, as it hides the location of the user and helps ensure that only privileged users have access to the network.

Why is this important? Networks allow communication across IT infrastructure, and protecting those communications should be a priority. We live in a world that continues to become more digital, which means businesses are at greater risk of actors exploiting potential vulnerabilities online. Following proper network security protocols prevents business interruption, data loss, fines and legal ramifications, and overall loss of business. In the long run, securing networks saves money and time on repairs, improves network operations, and minimizes the risk of major data breaches (Consolidated Technologies, Inc).

 

Sources:

https://consoltech.com/blog/why-network-security-is-more-important-than-ever/

https://www.ibm.com/topics/infrastructure

https://www.fortinet.com/resources/cyberglossary/what-is-dmz
