We are constructors of worlds, of products previously not imagined. What does it mean to be a constructor, and how is this different from simply churning out code someone else told you was needed? Oh, and we might talk about the science fiction of Stanisław Lem and the adventures of his constructor robots.
Distributed cluster schedulers are becoming increasingly popular. They provide a good abstraction for running workloads at warehouse scale on public and private clouds by decoupling workloads from compute, network and storage resources.
In this talk, we will discuss the operational challenges of running a cluster scheduler that serves highly available services across multiple geographies and in a heterogeneous runtime environment. We will go into the details of what is needed from a cluster scheduler with respect to managing multiple runtime/virtualization platforms, providing observability, performing maintenance on hardware and software, and more.
Find out why Clojure delighted Uncle Bob, why it is used by huge corporations like Netflix, Walmart and the Daily Mail (Allegro is joining this list), and why, according to Greenspun's tenth rule, you have already written your software in Lisp.
Running, building and deploying microservices is hard. Whether you are trying to break a monolithic application into small pieces or want to start a project from scratch, you'll need to figure out how to deal with security, service discovery, networking, monitoring, persistence, orchestration and cluster management. Once you manage to have a microservices architecture in place, you'll hit other challenges: scaling, infrastructure monitoring, building, running and shipping to your users.
In this talk I'll cover what you need to take into account when you run microservices and how those problems are addressed in MANTL. I'll also look into a continuous delivery pipeline for microservices using Shipped.
MANTL is an open-source platform for building microservices, started by Cisco. It combines the best open-source technologies to deliver an out-of-the-box open platform for microservices development. You can contribute to MANTL: https://github.com/CiscoCloud/mantl
Shipped is a CI/CD tool that will be released later this year by Cisco and is natively integrated with MANTL. Shipped is in open beta now: ciscoshipped.io
Technological advancements have taken solutions known from server rooms and introduced them into the IoT world. Modern operating systems can now run on credit-card-sized computers, and their increasing processing power encourages us to use tools like containers and higher-level programming languages on an entirely new class of devices. But managing complex IoT systems is not an easy task. Luckily, software like Jenkins and Ansible can be used to create a well-organized IoT environment. Devices with powerful communication capabilities can provide new features for e-commerce platforms.
This talk will show you how you can get into the exciting new world of IoT.
During the presentation we will combine all the technologies mentioned above into a working IoT solution.
Warning: this is not a purely theoretical introduction to IoT for DevOps - I promise a lot of moving and blinking IoT toys! We're gonna have fun!
It's already 2016: you have automated builds for your application, automated provisioning and creation of new environments, a suite of automated tests, and an automated CD pipeline capable of deploying to production. Basically, you automate all the things - yet you create CI jobs manually.
I bet you can automate that as well.
In this live coding session I'll show you how to leverage the Jenkins Job DSL plugin and its Groovy-based DSL to create CD pipelines that you can put under version control and recreate in seconds.
Along the way we'll also take a brief look at Jenkins internals and see some Groovy goodies.
Two years ago we set out to build our own monitoring tool to replace Icinga. Our biggest focus was flexibility and autonomy for the growing number of teams and engineers, enabling them to monitor their services - from small microservices to databases to higher-level business KPIs. Today ZMON provides teams with a federated monitoring solution that gathers data not only in our DCs but also in the connected AWS VPCs, and assists teams with service auto-discovery and the sharing of checks/alerts to make everyone's life easier. ZMON comes with Grafana2 and KairosDB, enabling rich data-driven dashboards.
As ZMON is an open-source project and relies on some great products (KairosDB, Redis) in the background, we also provide some insights into how we build, ship and deploy - using GoCD - exactly the same Docker images anyone can try for themselves.
Creating infrastructure for global web and mobile applications can be hard. Creating infrastructure for fast-growing global applications can be very hard :) At Brainly we had to move from a traditional LAMP setup with bare-metal servers to something new, and the cloud alone was not enough. With software like Ansible, Mesos, Docker and Consul, we have designed a fully automated, immutable setup - even with tests! In this presentation we will show you how, and share our experience with running this kind of platform.
Containers have become a key component of modern distributed application design. Similarly, Apache Cassandra has emerged as a clear choice for a scalable, highly available and performant database in a highly distributed cloud computing architecture.
In this session, we'll walk you through a number of patterns that enabled us to run Cassandra successfully with Docker in production. We will also share the lessons learned in running Cassandra with a very small set of resources that are applicable to both your local development environment and larger, less constrained production deployments.
We'll then introduce you to Amazon ECS, a highly scalable, high-performance service to run and manage distributed applications using Docker. Finally, we will discuss a few best practices used by our customers for running their Cassandra clusters on ECS.
Facebook has a centralized Chef codebase, with hundreds of engineers committing tens of changes every day - and it takes 5 minutes for each diff to land in production. Let's talk about how it's done without causing daily mayhem.
systemd, as a core component of most of today's Linux distributions, comes with built-in support for containers. It may host containers, it may run inside of containers, it integrates well with containers, and it even comes with its own minimal container implementation.
In this talk we'll discuss the various integration points systemd provides, and how the various facilities in systemd relate to the more well-known container projects like rkt, LXC or Docker.
DevOps is a state of the art that describes each software development step as a repeatable, automatable and deterministic process, excluding the error-prone human factor for the first time in the history of software development. The model defines the entire value chain from concept to concrete product. It is an evolutionary end point for software development models and the agile movement. But there is a problem with the concept: though easily described, it is a little hard to implement.
The DevOps Tactical Adoption Theory tries to make the transition process as smooth as possible. It hypothesizes that each step towards DevOps maturity should bring visible business value, strengthening management and team commitment to the next step. The innovative idea here is that it is not required to add tools and processes to the stack sequentially from beginning to end, but rather wherever they bring benefit.
The reasoning behind the theory is to encourage practitioners to apply each step one by one and then reap the benefits in their projects. Consequently, each step is tested in terms of utility, and its success proves the validity of the method for further steps. In contrast to previous adoption models, our model indicates concrete activities rather than general statements.
The theory is built on the claim that many DevOps transition projects are considered problematic, impractical or even unsuccessful, causing the concept to become a goal more than a technique. Basically, the theory consists of different areas of interest describing various actions on a schema.
In the session, we plan to demonstrate the “DevOps Tactical Adoption Theory” with a focus on the Delivery Pipeline/Testing Practices section, "Continuous Testing in DevOps".
HumanOps is a set of principles which focus on the human aspects of running infrastructure.
It deliberately highlights the importance of the teams running systems, not just the systems themselves.
The health of your infrastructure is not just about hardware, software, automations and uptime - it also includes the health and wellbeing of your team.
The goal of HumanOps is to improve and maintain the good health of your team: easing communication, reducing fatigue and reducing stress.
A pretty detailed story of how we built a real-time user monitoring platform gathering data from millions of users. Using the joint forces of CDN, cloud and big data, we created a tool for developers and product owners to guide them towards the right (and data-driven) product decisions.
According to a Forrester survey, IT and business decision-makers clearly ranked customer experience as the most important area in a company's digital strategy.
Enhancing the quality, performance and functionality of customer-facing digital offerings rose to the top, not only when we looked at cumulative first, second and third rankings, but also when we looked at which strategies garnered the most first-priority designations.
The digital era has come, and focusing on DX really pays off.
30+ slides for a 30-minute journey taking a closer look at the latest changes in APM and digital performance platforms as critical enablers of a company's digital transformation.
Continuous deployment is all over software companies. We will look at how to transfer some of its methods to hardware startups, exploring embedded device development as an iterative process rather than a traditional engineering approach.
- Why hardware is so hard
- Hardware test types
- Over-the-air updates
- A case for continuous deployment
- Vendors hate you
- libvirt test host hardware abstraction
- A quick look at a practical Jenkins setup
coi.gov.pl is the first government agency in Poland that has gone agile. We have adopted Scrum and Kanban as our people framework, and software engineering techniques and good practices - XP, DevOps processes (CI, CD, quality, ChM, RM), BDD, TDD, risk management and Git Flow - for the technical counterpart. Here's the story of our problems and the solutions we've come up with. It has been a long journey already, but there is still a lot to do ahead of us. Let's step into our case study for email@example.com
Becoming the next Uber is only possible when you bring your ideas to your end users faster. Some aspects of DevOps are perfect for that, as it only works if Ops and Dev work closely together. But what does this mean for you as a developer? Delivering code faster, with a high chance of failing faster?
In my opinion we need to look at key technical metrics such as memory usage per user or request, # of SQL statements, # of service calls, transferred bytes, ... - these are metrics you need to track starting at your workstation, all the way through CI, into Ops. And don't forget the business: How often is the new feature really used? What does it cost to run it? Let these metrics act as quality gates and stop builds early, before they crash your system faster than ever.
In this session we look at how companies like Facebook, CreditOne and co. apply metric-driven DevOps. We look at use cases where rapid deployments crashed, pinpoint the metrics that reveal the reason for the crash, and learn how to use these metrics to steer your pipeline: building better code and deploying faster, without failing faster!
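To make the quality-gate idea concrete, here is a minimal sketch in Python: a build fails early when one of its technical metrics regresses beyond a tolerated margin against a known-good baseline. The metric names, baseline values and 20% tolerance are illustrative assumptions, not figures from the session.

```python
# Hypothetical baseline captured from the last known-good build.
BASELINE = {"sql_count": 10, "service_calls": 5, "bytes_transferred": 50_000}

def gate(metrics: dict, baseline: dict, tolerance: float = 0.2) -> list:
    """Return the names of metrics that regressed beyond the tolerance.

    A metric is a violation when its value exceeds the baseline by more
    than `tolerance` (e.g. 0.2 allows up to a 20% increase).
    """
    return [
        name
        for name, value in metrics.items()
        if name in baseline and value > baseline[name] * (1 + tolerance)
    ]

# Metrics measured for the current build (illustrative values).
build = {"sql_count": 25, "service_calls": 5, "bytes_transferred": 48_000}

violations = gate(build, BASELINE)
if violations:
    # In a CI pipeline this would fail the build before the change
    # ever reaches production.
    print("Quality gate failed:", violations)
```

The same comparison can run at every stage - workstation, CI, Ops - as long as the metrics are collected consistently.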
Many of us think about DevOps in terms of the "Guinness Book of Records". We admire tools which can create a new VM from an image in 30 seconds or spin up a hundred servers in an hour. Then we see that real life is not as fascinating as DevOps presentations make it seem. Legacy technologies, a lack of credibility in the organization ("everything works fine, as always") and a lack of funds can kill every initiative.
I want to show how to design, create and implement a "low-cost" CI/CD process which influenced and changed my organization - and how to go from scepticism to a "you cannot stop this" state.
Security on the web is gaining more and more attention from both sides of the fence these days. Intruders become more skillful and better equipped, and enterprises try their best to be at least one step ahead. Both sides craft ever more sophisticated and powerful tools in an endless arms race. How do you keep up without overwhelming yourself?
Here in Kainos Smart we believe we've got an answer.
This talk is both a reminder of some of the basic principles and best practices of web application security, and a tale of our journey to becoming SOC 2 certified. The main focus is how to adapt to massive changes from a WebOps perspective.
In large organisations, projects and parts of the code base sometimes get shuffled around between teams - projects that have tons of responsibilities, that are business critical and that are hard to look after. What happens when a self-organised team that has a continuous delivery system in place receives one of these legacy products, the antithesis of best practices? The talk approaches this issue with a real-life example, highlighting the problems encountered and the solutions suggested and applied to each of them.
The talk starts by describing what a hot potato is: a project that breaks all the good practices a team has in place but is business critical and needs to be handled. After a general introduction, it gets down into the more specific details of the project that need to be addressed - a two-month deployment cycle, an extremely long start-up time, a huge number of responsibilities... - and then breaks them down into smaller, more approachable problems.
Once the problems have been reduced to something that can be handled, the talk approaches possible solutions. It explains the pros and cons of each of them and why one is chosen over another: the two-month deployment cycle is dramatically reduced by creating a Bamboo plan owned by the team; the start-up time decreases in several steps that include blue-green deployment plus code refactoring; and many other issues get resolved piece by piece.
Why are these solutions chosen over others? Sometimes for their quality, sometimes due to certain constraints like time, resources, adaptability... The reasons for each choice are the centre of the talk.
To close, the talk compares how the project looked at the beginning with how it behaves after all the improvements.
Monitoring has many names and often happens via external tools and applications. What would happen if we changed that and instead asked the system under test to perform a self-check and report its status? What would happen if we moved monitoring logic into the application? What would happen if we empowered the developers writing an application to implement its monitoring? This small change of roles opens a whole new world of possibilities - let's explore them together.
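One way the idea above could look in code - a minimal, hypothetical sketch, not the speaker's implementation: the application keeps its own registry of self-checks written by its developers, and reports an aggregate status that any external system can simply read.

```python
# Registry of self-checks the application knows how to run on itself.
CHECKS = {}

def check(name):
    """Decorator registering a function as a named self-check."""
    def register(fn):
        CHECKS[name] = fn
        return fn
    return register

@check("database")
def database_check():
    # Illustrative: in a real app this might issue a cheap `SELECT 1`
    # against the connection pool and return whether it succeeded.
    return True

@check("disk_space")
def disk_space_check():
    # Illustrative: e.g. compare shutil.disk_usage(".").free to a threshold.
    return True

def self_status():
    """Run every registered check; report per-check and overall health."""
    results = {name: bool(fn()) for name, fn in CHECKS.items()}
    return {"healthy": all(results.values()), "checks": results}
```

An HTTP endpoint exposing `self_status()` as JSON would give external monitoring a single, developer-maintained source of truth about the application's health.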
The shell command line is surely the best user interface in the world. Unfortunately, some disagree with that and avoid using anything that requires a terminal.
At Allegro we operate a petabyte-scale, secured Hadoop cluster that is used by more than two hundred of our employees. In this talk we present our experience in creating a user-friendly big data ecosystem.
This will include:
* Jupyter Spark notebooks for writing and running Spark jobs from a web browser,
* the Hue web app for executing Hive queries and scheduling Oozie workflows,
* a Spark deployment platform integrated with Atlassian Bamboo,
* a Hadoop desktop client to access HDFS from workstations,
* Active Directory integration.
All the presented solutions are built on top of open-source projects.
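As a taste of the kind of workstation access described above, here is a small sketch that talks to HDFS over the WebHDFS REST API using only the Python standard library. The namenode host, port and path are hypothetical, and this is not Allegro's actual client - just an illustration of how HDFS can be reached without a terminal on the cluster.

```python
import json
import urllib.request

def webhdfs_url(host: str, path: str, op: str, port: int = 50070) -> str:
    """Build a WebHDFS request URL for the given operation.

    50070 is the classic Hadoop 2 namenode HTTP port; adjust for your cluster.
    """
    return f"http://{host}:{port}/webhdfs/v1{path}?op={op}"

def list_directory(host: str, path: str) -> list:
    """Return the file statuses under `path` on the given namenode.

    Performs a network call, so it needs a reachable (and, on a secured
    cluster, authenticated) namenode - Kerberos/SPNEGO handling is omitted.
    """
    with urllib.request.urlopen(webhdfs_url(host, path, "LISTSTATUS")) as resp:
        payload = json.load(resp)
    return payload["FileStatuses"]["FileStatus"]
```

On a Kerberized cluster like the one described in the talk, the plain `urlopen` call would need to be replaced with a SPNEGO-capable client.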