Introducing Cribl Stream Projects: the self-service approach to Cribl Stream data

In our latest blog, Chris Chantrey, 4Data's Cribl Practice Lead, highlights the features and benefits of Cribl Stream Projects in making data available to different personas in a safe and reliable manner.

What is Cribl Stream Projects?

Cribl Stream continues to gain recognition as a valuable infrastructure platform for IT and Security teams. The increasing demand we're seeing for Cribl Stream as an internal service is a testament to its effectiveness in improving operations and enhancing security. With ITOps, SecOps, SRE, DevOps and other teams embracing Cribl Stream, Cribl has introduced a new feature: Cribl Stream Projects.

Cribl Stream Projects is a self-service model that allows a variety of users to securely access any observability data without requiring new agents or changes at the data sources. With Cribl Stream at the core of an enterprise’s observability architecture, administrators already have complete control over their observability data. Cribl Stream Projects adds to this control by enabling administrators to easily set up Projects based on department need, and shape the data in that Project to be optimised for a particular use, allowing new users to subscribe only to the data that is important to them.

Cribl Stream Projects reduces dependency on the administrator for onboarding new users and tools, and shortens time-to-value for the user. This enhances collaboration and provides deeper insights, resulting in a more personalised user experience. Cribl Stream Projects is the first product in the industry that lets organisations give teams control of their own data without needing to understand the infrastructure or service used to collect and route it. Think data democratisation in the truest sense!

What are the benefits of Stream Projects for Cribl administrators and Cribl users?

When combined with Cribl's new authorisation support, Stream Projects benefits both Cribl administrators and users by addressing their individual needs. Administrators want to limit the scope and blast radius of each user's changes so that teams work within a defined boundary. Users, in turn, get simplified views and workflows for the data that matches their needs and entitlements, without affecting other users downstream.

Administrators create a Project and define which sources data is collected from and which destinations receive the processed data from Cribl Stream. The Project creates a defined scope for users to work within, minimising the risk of errors or unauthorised changes.

Cribl Stream Projects comprises three primary resources:

  • Data Projects: A Data Project serves as a dedicated space for data experts to work solely on the data they are interested in, because that data is fit to help them do their jobs. Stream admins can create secure Projects with pre-determined inputs and outputs, minimising the need for data experts to understand the overall pipeline mechanics. This also lets new teams and departments embrace Cribl Stream, reducing the need to manage separate pipelines and processing tools.
  • Subscriptions: These are sub-streams of data created by applying filters and pre-processing pipelines or Packs. For example, you can filter for contractor records and have PII removed in the pre-processing pipeline before the subscription is delivered to a Project (see the sketch after this list).
  • Roles: A Role defines the access permissions for a particular Project; only users assigned that Role can access it.
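
To make these resources a little more concrete, here is a minimal TypeScript sketch of how the three concepts relate. It is illustration only, not Cribl's actual configuration syntax: Projects are set up through the Stream UI and API, filters are JavaScript expressions, and the field names (employmentType, ssn) and destination name below are hypothetical.

```typescript
// Conceptual model only: not Cribl configuration syntax.

interface LogEvent {
  source: string;
  employmentType?: string; // hypothetical field used by the example filter
  ssn?: string;            // hypothetical PII field masked in pre-processing
  [key: string]: unknown;
}

// A Subscription: a filtered, pre-processed sub-stream of source data.
interface Subscription {
  name: string;
  filter: (e: LogEvent) => boolean;      // which events the Project receives
  preProcess: (e: LogEvent) => LogEvent; // e.g. PII removal before delivery
}

// A Data Project: admin-defined inputs/outputs plus the subscriptions feeding it.
interface DataProject {
  name: string;
  subscriptions: Subscription[];
  destinations: string[]; // outputs pre-determined by the Stream admin
}

// A Role: which users may access a given Project.
interface Role {
  name: string;
  project: string;
  members: string[];
}

// Example: contractor events only, with the hypothetical ssn field redacted.
const contractorSub: Subscription = {
  name: "contractor-data",
  filter: (e) => e.employmentType === "contractor",
  preProcess: (e) => ({ ...e, ssn: e.ssn ? "REDACTED" : undefined }),
};

const hrProject: DataProject = {
  name: "hr-analytics",
  subscriptions: [contractorSub],
  destinations: ["team-analytics-store"], // hypothetical destination name
};

const hrAnalyst: Role = {
  name: "hr-analyst",
  project: "hr-analytics",
  members: ["alice"],
};

console.log(hrProject.name, hrAnalyst.members);
```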

Cribl Stream Projects in practice

To better understand Stream Projects, let's look at a real-life scenario. The Cribl administrator wants to take data from the same Firewall sources and split it between the Security and Ops teams, where neither team should have access to the other's data and processes. Let's assume the Security team uses Splunk while the Operations team uses Elastic.

Without Stream Projects

  • The Cribl administrator restricts access to sources and pipelines, creates routes to filter the data, and sends the filtered data to custom pipelines created by the users/teams, which requires back-and-forth because of the dependencies between them.
  • The downside is that the teams can see each other's data, and any change to one team's route can break the other's, making the data inconsistent for both teams.

With Cribl Stream Projects

  • The Cribl administrator creates a Data Project per team, each with its own subscription (sketched in the example after this list).
  • Both teams have secure access to their own sets of data, and the changes don’t impact the other team.
  • Both teams have the ability to change the data as they need to, rather than having to go back and forth with the Cribl admin to get a custom pipeline created.
  • With Cribl’s new authorisation feature, team members perform their specific roles effectively while maintaining the appropriate level of access.
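
To illustrate the split, here is a rough TypeScript sketch of the idea, assuming a hypothetical eventCategory field on the firewall events; in Cribl Stream itself this logic would live in each subscription's filter expression rather than in code.

```typescript
// Conceptual sketch of the two-team split; field and values are invented.

interface FirewallEvent {
  source: string;
  eventCategory: "threat" | "traffic"; // hypothetical field driving the split
}

// One subscription filter per team; each team sees only its own slice.
const securitySubscription = (e: FirewallEvent) => e.eventCategory === "threat";
const opsSubscription = (e: FirewallEvent) => e.eventCategory === "traffic";

function routeToProjects(events: FirewallEvent[]) {
  return {
    securityProject: events.filter(securitySubscription), // delivered to Splunk
    opsProject: events.filter(opsSubscription),           // delivered to Elastic
  };
}

const sample: FirewallEvent[] = [
  { source: "fw01", eventCategory: "threat" },
  { source: "fw01", eventCategory: "traffic" },
];

// Changing one filter cannot break the other team's sub-stream.
console.log(routeToProjects(sample));
```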

So, as we've seen, Cribl Stream Projects diversifies who can use Cribl Stream, and how. It creates isolated spaces for teams and users to share and manage their data. This self-service approach benefits its immediate users by giving them accelerated access to relevant data with minimal configuration, and it benefits their peers by keeping each team's changes isolated from everyone else's.

4Data Solutions is an expert in implementing Cribl Stream. If you are considering adding an observability pipeline to your cloud migration strategy, talk to us today.

Call us on +44 330 128 9180 or email info@4datasolutions.com.

How an observability pipeline can help with cloud migration

In this blog, we will explore how an observability pipeline can help with cloud migration and outline the ways in which Cribl Stream stands out from other observability platforms.

Would you like to confidently move workloads to the cloud without dropping or losing data? Of course, everyone does. But it’s easier said than done.

Cloud migration is tricky. There’s so much to consider. How can you reconfigure architectures and data flows to ensure parity and visibility? How do you know the data in transit is safe and secure? How can you get your job done without getting in trouble with procurement?

Moving databases, applications, services, workloads and IT processes to the cloud is a huge undertaking. So why even bother? Because with big cloud moves come big benefits: optimised performance, reduced management overhead and cost savings on data centres. Cloud drives the scalability, flexibility, agility and reliability that businesses need to succeed in the future.

By incorporating observability into your cloud migration strategy, you gain end-to-end visibility across all layers (infrastructure, applications and services), helping you improve deployments and keep costs under control. An observability pipeline that collects, transforms, reduces, enriches, normalises and routes data to any destination gives you full control of your data. By reshaping problematic data sources, it can also make the migration itself much smoother and leave the system running more efficiently after migration than the on-prem solution it replaces.

Here are just a few of the ways in which an observability pipeline can help with cloud migration:

  • Routing – Route data to multiple destinations in any cloud, hybrid or on-prem environment, for analysis and/or storage. This gives teams confidence that they can maintain parity between on-prem and cloud deployments and reduce egress charges across zones and clouds, with the added bonus of accelerated data onboarding through in-stream normalisation and enrichment.
  • Normalisation – Prepare the data for the destination's expected schema, e.g. Splunk Common Information Model (CIM) or Elastic Common Schema (ECS), to reduce the overhead of preparing and tagging the data after ingestion or in each destination (see the sketch after this list).
  • Optimisation – Send only the relevant data to your cloud tools to free up licence headroom and reduce the required infrastructure. Some of our Cribl customers have reported reductions of up to 70% on both counts. As an added benefit, with only relevant data reaching your destinations, you'll see better performance across searches, dashboard loading and more.
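
As a rough illustration of that normalisation step, the sketch below maps hypothetical vendor firewall fields to ECS-style keys. The target names (source.ip, destination.ip, event.action) are genuine ECS fields; the input shape is assumed for the example.

```typescript
// Hypothetical vendor firewall event; the input field names are assumed.
interface VendorEvent {
  srcip: string;
  dstip: string;
  action: string;
}

// Map vendor fields to ECS-style keys once, in the pipeline, instead of
// repeating the mapping in every downstream destination.
function toEcs(e: VendorEvent): Record<string, unknown> {
  return {
    "source.ip": e.srcip,      // ECS field
    "destination.ip": e.dstip, // ECS field
    "event.action": e.action,  // ECS field
  };
}

console.log(toEcs({ srcip: "10.0.0.1", dstip: "10.0.0.2", action: "allow" }));
```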

Why choose Cribl for cloud migration?

Cribl offers tools that simplify your toolset while letting you validate your data migration every step of the way. Most observability tools work by having agents on hosts stream log, metric and trace data directly to destination tools. Migration often means switching these data streams from on-prem to cloud solutions and crossing your fingers that everything works smoothly.

But the reality is that differences in cloud solutions, tool misconfiguration and missing historical events can lead to data loss. This causes inaccurate reporting and missed security events, and can even force a dreaded deployment rollback.

Cribl Stream – Cribl's vendor-agnostic observability pipeline – solves these issues by acting as a first-stop data router. Once your data is flowing into Stream, you can route it to multiple destinations without incurring extra costs. This means you can have the same data streaming to both your on-prem and your cloud tools simultaneously, allowing you to make sure the resulting data is exactly what you expect.

You can even validate your data at multiple points in the Cribl Stream pipeline well before it’s sent to your destinations. Once you’ve confirmed everything looks good, you can then turn off the unneeded route and shut down your on-premises deployment.
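
Conceptually, the validation pattern looks like the sketch below: the same events are fanned out to both environments so counts or samples can be compared on each side. Cribl Stream does this with Routes rather than hand-written code; the destination functions here are just stand-ins.

```typescript
// Stand-in sketch of dual routing during a migration validation window.
interface StreamEvent {
  raw: string;
}

type Destination = (e: StreamEvent) => void;

// Fan every event out to both environments, then compare counts
// (or samples) on each side to validate parity.
function dualRoute(
  events: StreamEvent[],
  onPrem: Destination,
  cloud: Destination
): number {
  let sent = 0;
  for (const e of events) {
    onPrem(e);
    cloud(e);
    sent++;
  }
  return sent;
}

// Hypothetical destinations that just log what they receive.
const toOnPrem: Destination = (e) => console.log("on-prem:", e.raw);
const toCloud: Destination = (e) => console.log("cloud:", e.raw);

dualRoute([{ raw: "event-1" }, { raw: "event-2" }], toOnPrem, toCloud);
```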

As an additional protection, your data can also be routed to low-cost object storage such as Amazon S3. When you need to pull data back from storage, Stream's Replay functionality can send it through your pipelines again and into the necessary tools.

In most observability and security tools, additional knowledge about the data is stored in the tools themselves: normalised fields, additional IP information, masks for sensitive data and so on. During migration, all this knowledge has to be recreated or copied into the new environment. Cribl Stream can reduce, optimise and enrich data at the pipeline level, so you create the required knowledge objects once in Stream and the enriched data flows to all your destinations – saving your team hours of implementation time.
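
As a hedged illustration of that idea, the sketch below enriches events once, in the pipeline, from a simple lookup table, so every destination receives the same enriched copy. The lookup contents and field names are invented for the example.

```typescript
// Invented lookup table mapping source IPs to owning teams.
const assetOwners = new Map<string, string>([
  ["10.0.0.1", "payments-team"],
  ["10.0.0.2", "web-team"],
]);

interface NetEvent {
  srcIp: string;  // hypothetical field name
  owner?: string; // filled in by enrichment
}

// Define the knowledge object (the lookup) once; apply it in-stream so the
// same enriched event fans out to every destination.
function enrich(e: NetEvent): NetEvent {
  return { ...e, owner: assetOwners.get(e.srcIp) ?? "unknown" };
}

const enriched = [{ srcIp: "10.0.0.1" }, { srcIp: "9.9.9.9" }].map(enrich);
console.log(enriched);
```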

Data routing in Cribl Stream is extremely powerful. Not only does it let you migrate from on-prem to cloud services, it also lets you evaluate different solutions and share data across multiple tools. By routing data from existing sources to multiple destinations, you can ensure data parity in your new cloud destinations before turning off your on-premises (or legacy) analytics, monitoring, storage or database products and tooling. Cribl can also cut costs significantly: placing Cribl Stream worker nodes inside your cloud (AWS, Microsoft Azure or GCP) reduces latency and lets you compress data efficiently before it moves, keeping egress charges under control.

4Data Solutions is an expert in implementing Cribl Stream. If you are considering adding an observability pipeline to your cloud migration strategy, talk to us today.

Call us on +44 330 128 9180 or email info@4datasolutions.com.