Explore YOW! 2014
Real-time reactive user interfaces need to handle everything from dozens, and sometimes hundreds, of updates per second down to data that changes on a daily or weekly basis, as well as input from users. In other words, everything is a stream of data.
We will discuss how the trading applications we’ve built make extensive use of reactive extensions to compose these streams to provide real-time, correct information about the state of the market, and the system. We’ll talk about the internal architectures of real time trading applications built to handle this sort of complexity.
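The stream-composition idea can be sketched in plain Python. Generators stand in here as a much-simplified model of Rx observables, and the names (`prices`, `moving_average`, the `ACME` symbol) are illustrative, not from the talk:

```python
def prices(ticks):
    # map: turn raw tick events into (symbol, price) pairs
    for t in ticks:
        yield t["symbol"], t["price"]

def moving_average(stream, n):
    # stateful operator: rolling average over the last n prices
    window = []
    for symbol, price in stream:
        window.append(price)
        if len(window) > n:
            window.pop(0)
        yield symbol, sum(window) / len(window)

ticks = [{"symbol": "ACME", "price": p} for p in (10.0, 11.0, 12.0, 13.0)]
out = list(moving_average(prices(ticks), n=2))
# the final event averages the two most recent prices
```

Composing operators like these, rather than wiring callbacks by hand, is what lets a trading system derive correct, real-time views of the market from many independent input streams.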
How to keep it lean, move fast and leverage your mobile channel.
More and more enterprises are moving their mobile application code bases in-house, and these enterprises are struggling to keep up with the demands of the mobile ecosystem. Users expect a seamless experience; they expect new features as well as constant availability of services.
This session focuses on applying solid engineering practices to your mobile applications. By understanding common mistakes and how to mitigate the associated risks, you’ll learn to create a stable platform from which to continuously develop and deploy.
With Git you can make it look like you’re the perfect programmer who never makes a mistake! In this fast-paced, terminal-driven session, learn advanced techniques for undoing almost anything with Git. Starting from the safety of a revert, we’ll dabble with commit --amend, three types of reset and the dreaded reflog. We’ll also look at rebase -i and how to undo things like a merge commit or even a fast-forward merge.
On one hand the software development industry is pushing forward, reinventing the way that we build software, striving for agility and craftsmanship at every turn. On the other though, we’re continually forgetting the good of the past and software teams are still failing on an alarmingly regular basis. Software architecture plays a pivotal role in the delivery of successful software yet it’s often neglected. Whether performed by one person or shared amongst the team, the software architecture role exists on even the most agile of teams yet the balance of up front and evolutionary thinking often reflects aspiration rather than reality. By steering away from big up front design and ivory tower architects, many teams now struggle to create a consistent, shared technical vision to work from. This can result in chaos, big balls of mud or software that still fails to meet its goals, despite continuous user involvement.
This talk will explore the importance of software architecture and the consequences of not thinking about it, before introducing some lightweight techniques to bring the essence of software architecture back into an agile environment. We’ll look at creating a shared vision within the development team, effectively communicating that vision and managing technical risk. I’ll also share the strategies that I’ve used to introduce these techniques into agile teams, even those that didn’t think that they needed them.
Microservice architecture has debuted on the ThoughtWorks Technology Radar as the first technology it addresses, with a strong recommendation to experiment immediately. In this talk, we will outline the guidelines we have used at two different companies to implement microservices. More importantly, we will tell you about the pitfalls we have encountered.
Julia is well-designed; it’s fun to write and easy to learn, especially for its niche of technical computing. However, one of the biggest draws for new users is its speed. Julia was designed from the beginning to run fast without heroic implementation efforts. This has allowed it to achieve near-C speeds despite still having only a handful of full-time developers. I’ll talk about some of the key things Julia does to be fast, from aggressive specialization to best-effort type inference and beyond. I’ll show what fast Julia code looks like, discuss what makes specific features fast (e.g. multiple-dispatch), and put this all in context with Julia’s “low-magic” design philosophy.
Building robust, quality systems is hard. We trade off organizational issues against technical decisions; the ability to deliver quickly against our ability to change; and the ability to build systems easily against the ability to run those systems in production. However, good architectural decisions can free us to choose the right tools and techniques, allowing us to manage these challenges and concentrate on solving real problems rather than our made up ones.
In this talk, we will run through some stereotypical projects, come to terms with legacy systems, and look at the properties of robust architectures. In particular we are interested in how architectures lend themselves to experimentation and change in terms of both function and technology.
We will attempt to ground the discussion with examples from my past projects, looking at where things have worked well and, probably of more interest, where they really have not.
Companies like Amazon, Google and Netflix have shown that software can provide a powerful competitive advantage to organizations experimenting with disruptive business models. However in more traditional organizations, where IT is “just a department”, it’s easy to be cynical about the transformative power of software development. The main barriers are cultural and architectural – and of course, these concerns are linked.
This talk will begin by presenting the principles that enable rapid software-driven innovation at scale. We will then spend the bulk of the talk discussing how to transform existing organizations, using case studies from several domains. By the end, you will be equipped with battle-tested approaches to better serve customers by harnessing your organization’s true competitive advantage – the ingenuity of its employees.
The prevalence of online attacks against websites has accelerated in recent years, yet the same risks continue to be exploited. However, these are often easily identified directly within the browser; it’s just a matter of understanding the vulnerable patterns to look for.
‘Hack Yourself First’ is all about developers building up cyber-offence skills and proactively seeking out security vulnerabilities in their own websites before an attacker does. It recognises that we have huge volumes of existing websites that haven’t gone through sufficient security review plus we continue to create new content that even when built with security in mind, still needs testing from the perspective of a cybercriminal.
In this session we’ll look at website security from the attacker’s perspective and exploit common risks in a vulnerable web application. We’ll also explore ways to easily grab credit cards, gain immediate FTP access to thousands of websites, crack password cryptography you think is secure and hijack wifi.
Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure on the JVM.
Handling streams of data—especially “live” data whose volume is not predetermined—requires special care in an asynchronous system. The most prominent issue is that resource consumption needs to be carefully controlled so that a fast data source does not overwhelm the stream destination. Asynchrony is needed in order to enable the parallel use of computing resources, whether on collaborating network hosts or on multiple CPU cores within a single machine.
This presentation will use the new akka-streams project and Scala to demonstrate how streams with back pressure can be managed.
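The essence of non-blocking back pressure is that the consumer signals demand and the producer never emits more than was requested. Here is a minimal Python sketch assuming a simple pull-based model; the `Publisher`/`request` names echo Reactive Streams terminology but this is illustrative code, not the Reactive Streams or akka-streams API:

```python
class Publisher:
    def __init__(self, data):
        self._it = iter(data)

    def request(self, n):
        # the subscriber signals demand for n elements; the source
        # never emits more than was requested, however fast it is
        out = []
        for _ in range(n):
            try:
                out.append(next(self._it))
            except StopIteration:
                break
        return out

fast_source = Publisher(range(1_000_000))
received = []
while len(received) < 10:
    # a slow consumer requests small batches, so a fast source cannot
    # overwhelm it no matter how much data it could produce
    received.extend(fast_source.request(3))
```

The real specification makes this demand signalling asynchronous (via `onNext` callbacks rather than return values), but the invariant is the same: data flows only in response to demand.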
This talk celebrates the awesome parts of the Groovy language and the Groovy ecosystem. You’ll see some exciting examples of Groovy and its applications: everything from domain-specific languages, dynamic typing, the extensible static typing system, Android programming, concurrency, enterprise programming made productive and functional programming, to a host of interesting frameworks and tools.
This talk provides a whirlwind tour of some new types of functional data structures and their applications.
Cache-oblivious algorithms let us perform optimally for all cache levels in your system at the same time by optimizing for one cache for which we don’t know the parameters. While Okasaki’s “Purely Functional Data Structures” taught us how to reason about asymptotic performance in a lazy language like Haskell, reasoning about cache-oblivious algorithms requires some new techniques.
Succinct data structures let us work directly on near-optimally compressed data representations without decompressing.
How can we derive new functional data structures from these techniques? Applications span areas as diverse as speeding up something like Haskell’s venerable Data.Map, handling “big data” on disk without tuning for hardware, and parsing JSON faster in less memory.
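To make the succinct-structure idea concrete, here is a toy rank index in Python: it answers “how many 1-bits occur before position i” from small precomputed block counts instead of rescanning the bit vector. Real succinct structures keep this index in o(n) extra bits; a plain list and a tiny block size stand in here for clarity:

```python
BLOCK = 4

def build_rank(bits):
    # cumulative 1-count at each block boundary
    counts = [0]
    for b in range(0, len(bits), BLOCK):
        counts.append(counts[-1] + sum(bits[b:b + BLOCK]))
    return counts

def rank1(bits, counts, i):
    # whole blocks come from the table, the tail from a short scan
    block, rest = divmod(i, BLOCK)
    return counts[block] + sum(bits[block * BLOCK: block * BLOCK + rest])

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
counts = build_rank(bits)
assert rank1(bits, counts, 7) == 4  # four 1-bits among bits[:7]
```

Rank (and its inverse, select) are the primitives from which succinct trees, text indexes and compressed JSON parsers are built.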
Becoming a professional programmer in the days of Test Driven Development, many are led to believe that there is only one approach to software design and that everyone else is wrong. But what if you’re also a Type fanatic like me? Can these two methods live in harmony? Should they? In this talk we’ll take a rational look across the spectrum of Test and Type driven design, examining the benefits and limitations of these approaches, as well as the language features that best serve these differing needs.
Stop me if you’ve heard this one: “We used to be small. We made great decisions, got product to the market fast, and were very successful. Now we are big. And slow. Our teams don’t work together very well. Our specialists are spread too thin. Our products are less than awesome.”
Getting teams to work well is hard. Getting teams to work well together is much harder. And the dilemma is, what works in a small organization is often counterproductive at scale. The question is – what do you have to do differently when you grow up?
For starters, scaling is fundamentally a complexity problem, so you should look for ways to reduce and deal with complexity. Second, scaling is a cooperation problem, and understanding what promotes and what destroys cooperation is essential for growing organizations.
Finally, scaling is an organizational problem, and there’s no shortage of models to study for patterns of how to scale organizations. There’s the lean model, the military model, and several unicorn models. These models confirm the fact that scale is possible, and are full of ideas for you to experiment with. But they won’t tell you which approach is best for you – you have to figure that out for yourself.
Devices and applications that connect to each other and the rest of the world have a wealth of protocol options to choose from in terms of how best to perform that connectivity. The REST model, and HTTP, has dominated for a long time. However, REST is about to get an upgrade. HTTP/2 is imminent, WebSocket is here, the needs of applications are rapidly changing, and new IoT protocols are vying for attention amongst the noise. Now is the time for new models to emerge providing end users with new richer experiences, providers with new scaling options, and developers with different approaches.
Let’s take a hands-on deep dive into these new protocols and see exactly what a message-driven, reactive approach might offer for evolving and reimagining some long-standing issues, and challenge some long-held beliefs. Also, be among the first to see some new open source projects in action that leverage these new protocols and approaches.
In ideal Agile development, teams build small, valuable chunks of functionality. But that’s easier said than done. Not all products or features are small, and breaking them down into small buildable parts is challenging. And even when you do, how do the people building those small parts not lose sight of the big picture?
Story mapping is a simple practice for telling the story of a whole product or feature, starting by telling the stories of the users who’ll use it. In this fast-paced workshop you’ll learn the concepts of story mapping by building a map collaboratively with others. You’ll learn advanced techniques for slicing a map to find small viable product releases, and then how to build your product using smaller stories without losing sight of the big picture.
Does TCP not consistently meet your required latency? Is UDP not reliable enough? Do you need to multicast? What about flow control, congestion control, and a means to avoid head-of-line blocking that can be integrated with the application? Or perhaps you’re just fascinated by how to design for the cutting edge of performance? Maybe you have tried higher-level messaging products and found they are way too complicated because of the feature bloat driven by product marketing cycles.
Aeron takes it back to basics with a pure focus on performance and reliability. We have built it from the ground up with mechanical sympathy in its DNA. The concurrent data structures are lock-free, wait-free, copy-free, and even persistent for our functional friends. Interaction with the transport media is layered so you can swap between UDP, InfiniBand, or shared memory as required.
This talk will focus on the design of Aeron and what we learned trying to achieve very consistent performance. We will explore the challenges of dealing with reliable message delivery over UDP and the data structures necessary to support transmission, loss detection, and retransmission in a lock-free and wait-free manner.
Time is money. Understanding application responsiveness and latency is critical not only for delivering good application behavior but also for maintaining profitability and containing risk. But good characterization of bad data is useless. When measurements of response time present false or misleading latency information, even the best analysis can lead to wrong operational decisions and poor application experience. This talk demonstrates common pitfalls encountered in measuring, describing and reporting latency and response time behavior. It then demonstrates the use of recently open sourced tools to improve and gain higher confidence in both latency measurement and reporting.
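One classic pitfall of the kind this talk covers is summarizing latency with the mean, which hides the tail that users actually experience. A small Python sketch with synthetic data (the workload and numbers are invented purely for illustration):

```python
import random

random.seed(1)
# 10,000 mostly-fast responses with a roughly 1%-probability long stall
samples = [1.0 if random.random() > 0.01 else 500.0 for _ in range(10_000)]
samples.sort()

mean = sum(samples) / len(samples)
p99 = samples[int(0.99 * len(samples))]
p999 = samples[int(0.999 * len(samples))]

# The mean stays comfortably low while the high percentiles expose the
# stalls that every long-running user session eventually hits.
print(f"mean={mean:.1f}ms  p99={p99:.1f}ms  p99.9={p999:.1f}ms")
```

This is before even considering coordinated omission, where a stalled measurement loop fails to record the requests that would have happened during the stall, making the recorded percentiles themselves too optimistic.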
Working effectively with large volumes of data presents challenging technical and human factors issues. For example, how can today’s analyst iteratively process, display and explore tens of millions of records in order to identify new trends or patterns? Over the last few years, we have been wrestling with some of the more pragmatic aspects of trying to build and deliver a big data analytic solution that meets the needs of the typical analyst.
In this talk, we will discuss several of the technical and human factor issues in making big data accessible to the data scientist or analyst. Topics covered will include what type of interactive analysis support is required for large datasets, what types of visualizations are needed for big data, what type of scripting or programming support analysts need, how to address the typical analyst task flow in a big data solution, and the need for non-linear task flow support. In addition, we will discuss some of the generic ways of dynamically visualizing data and the fundamental principles of good visual design for data.
Microservice architectures have generated quite a bit of hype in recent months, and practitioners across our industry have vigorously debated the definition, purpose, and effectiveness of these architectures. In this session, we’ll cut through the hype and examine some very practical considerations related to microservices and how we might solve them:
Not an End in Themselves: why microservices are really all about continuous delivery and how they help us achieve it.
Systems over Services: why microservices are less about the services themselves and more about the systems we can assemble using them. Boilerplate patterns for configuration, integration, and fault tolerance are key.
Operationalized Architecture: microservices aren’t a free lunch. You have to pay for them with strong DevOps sauce.
It’s About the Data: bounded contexts with API’s are great until you need to ask really big questions. How do we effectively wrangle all of the data at once?
Along the way, we’ll see how open source technology efforts such as Cloud Foundry, Spring Cloud, Netflix OSS, Spring XD, and Hadoop can help us with many of these considerations.
Microservices seem to have taken the tech world by storm in recent months. The promise of flexible architectures that evolve and adapt to changing business models is irresistibly attractive. But in the rush to implement these systems, we’ve seen technologists leave some of the stickiest problems to last. Whether you’re decomposing an unwieldy monolith or starting with greenfield delivery, there are certain universal challenges you will eventually encounter. We’ve been building these systems globally for several years now and witnessed the transition from exuberance through despair to sustainable, steady productivity. In this talk, I’ll dive into three of the biggest issues that microservice teams encounter:
How to secure your microservices
How to manage aggregated data
How to refactor your services as you learn about the domain
To illustrate these points, I’ll draw on my own microservice experiences as well as those of friends and colleagues around the world. You’ll walk away with some practical advice for avoiding these common calamities.
Machine learning and statistical modeling allows us to answer questions such as:
What’s the likelihood that this user will buy from our website?
When will this mechanical part suffer a critical failure?
What’s the credit risk of this customer?
This session provides an introduction to machine learning and predictive analytics and discusses how to implement such a predictive model: from accessing data sources, through data exploration, feature selection and creation, building training and test sets, machine learning over the data, and model evaluation and experimentation, to finally deploying the model as a service.
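That end-to-end flow can be sketched with a deliberately tiny, library-free Python example. The one-feature synthetic dataset and the threshold “model” are stand-ins for real data sources, feature engineering and learning algorithms:

```python
import random

random.seed(0)
# 1. "Data source": one numeric feature; the label is 1 when the
#    feature exceeds 0.5 (a stand-in for real, messier data)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]

# 2. Build training and test sets
random.shuffle(data)
train, test = data[:150], data[150:]

# 3. "Training": fit a threshold halfway between the two class means
def mean(xs):
    return sum(xs) / len(xs)

m0 = mean([x for x, y in train if y == 0])
m1 = mean([x for x, y in train if y == 1])
threshold = (m0 + m1) / 2

# 4. Model evaluation on the held-out test set
def predict(x):
    return int(x > threshold)

accuracy = mean([float(predict(x) == y) for x, y in test])

# 5. "Deploying as a service" would mean wrapping predict() behind an
#    HTTP endpoint; here it is just a function
```

Every step here has an industrial-strength counterpart (feature pipelines, cross-validation, model registries), but the shape of the workflow is the same.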
The metrics that are commonly used for assessing software team productivity are based on outputs in the form of features, user stories or function points, and throughput measures such as story points or cycle time.
The big question is, are these the right metrics? Or are we only measuring these because they are easy?
According to one of the leading research companies, the next emerging trend is to move away from these throughput and output measures to ‘outcome metrics’. Where throughput measures the effort over time, and outputs measure how much you deliver, outcomes measure the results achieved at the desired quality levels.
In this compelling talk, Gabrielle Benefield will discuss the pitfalls of traditional metrics and how they are not fit for purpose, then provide an alternative approach that teams are adopting worldwide using the Mobius framework.
Gabrielle will walk through a case study in which this method saved a client twelve million pounds annually after only two days’ work, and dramatically changed the product backlogs for the teams.
Not only can outcome metrics transform the business, they can also be used to assess the technical quality of what is being built, align the customer and suppliers to build ‘the right product’ and give suppliers a competitive edge.
Software is everywhere today, and countless software products and projects die a slow death without ever making any impact. Today’s planning and roadmap techniques expect the world to stand still while we deliver, and set products and projects up for failure from the very start. Even when they have good strategic plans, many organisations fail to communicate them and align everyone involved in delivery. The result is a tremendous amount of time and money wasted due to wrong assumptions, lack of focus, poor communication of objectives, lack of understanding and misalignment with overall goals. There has to be a better way to deliver! Gojko presents a possible solution, impact mapping, an innovative strategic planning method that can help you make an impact with software.
The first generation of human-computer interfaces (HCI) (1950s-70s) used punched cards and line printers as interface devices. It was considered wonderful at the time (I remember). Starting in the 1980s with the Xerox STAR and Smalltalk systems and then the Apple Macintosh, this was replaced by the WIMP (“windows, icons, menus, pointer”) graphical user interfaces that are now commonplace. We have seen another genuine revolution in computing in the last five years, brought on by the introduction of “smartphones” that incorporate able-bodied computers (in terms of MIPS and GBytes) combined with three or more radios (cell, WiFi, Bluetooth, GPS) and a variety of sensors (microphones, multi-touch screens, cameras, accelerometers, compasses, etc.). These devices are growing in popularity as “media hubs” posing as telephones or tablets.
It should be obvious that the future of computing is to be found here, that the coming generations of laptop and desktop computers will integrate I/O devices for multiple media and multiple modes of networking, and that this new generation of HCI systems will be as different from the WIMP model as it was from the punched card world that preceded it. This presentation will draw from the presenter’s 30 years of experience in advanced multimedia computing and a decade of experience teaching graduate courses in multimedia engineering at UC Santa Barbara. It will consider a series of ancient technologies that are now relevant again, as well as posing a set of questions about our assumptions about how people interact with software and services.
Far from alleging to have all the answers, the presenter will re-evaluate several assertions made in the last 30 years regarding the use of “thin” clients, of cloud computing and data storage, on the increasing use of multimedia data, and on support for higher-level data models and interaction modes in end-user application software.
There’s been a Cambrian explosion of new programming languages of late, prompted by advances in programming language research and practice, and also by the advent of LLVM, the modular compiler backend.
D is a serious contender in the systems-level programming area. It supports that claim by being good at everything C and C++ are good at, and also by being good at many tasks that C and C++ are not good at. D also has a compelling interoperability story that allows gradual migration and new modern code that reuses legacy libraries.
This talk is an introduction to D, with emphasis on a few differentiating features.
This talk will discuss Apple’s new Swift programming language for the development of iOS and Mac apps. A language overview and comparison to other popular modern languages will be provided, followed by coverage of the new data structures and functional programming abilities of the language, interoperability with other languages, and a frank discussion of the advantages and shortcomings of using Swift for future development.
It makes good sense to follow Google’s lead with technology. Not because what Google does is particularly complex – it isn’t. We follow Google for two reasons:
Google is operating at an unprecedented scale and every mistake they make related to scale is one we don’t have to repeat, while every good decision they make (defined as “decisions that stick”) is one we should probably evaluate;
Google is as strong an attractor of talent as IBM’s labs once were; that much brainpower – even if a large part of it is frittered away on the likes of Wave, Buzz and Aardvark – produces value for all of us.
Using Hadoop is not following Google’s lead. It’s following Yahoo’s lead, or more precisely, the lead of venture capitalists who took a weak idea and made an industry of it. MapReduce is behind the state of the art, to the point that Google discarded it as a cornerstone technology years ago.
The problems of scale, speed, persistence and context are the most important design problems we’ll have to deal with during the next decade.
We must work through what we mean by “big data”, what we mean by “structured” and “unstructured” and why we need new technologies to solve some of our data problems. But “new technologies” doesn’t mean reinventing old technologies while ignoring the lessons of the past. There are reasons relational databases survived while hierarchical, document and object databases were market failures, technologies that may be poised to fail again, 20 years later.
What can following-Google, as a design principle, tell us about scale, speed, persistence and context? Perhaps that workloads are broader than a single application. That synthetic activities downstream from the point where data is recorded are as important as that initial point. Or that relational models of some sort will be in your future.
Making a living building great apps is getting more and more challenging. Since the launch of the iPhone our industry has developed an incredible art and craft around building great software, but that doesn’t necessarily make it sustainable. Drawing inspiration from Panic’s Cabel Sasser, we look at the idea of a Maximum Viable Product by asking ourselves, “What’s the best product the market will bear? How can I find a market for quality software?”
What do organizations do with Hadoop? What are the components in the Hadoop ecosystem used for?
This talk will take you through a story about “DataCo” and how they use various tools in the “Big Data Landscape” to address a handful of business needs that come with data challenges. DataCo might be a made-up company, but the use cases showcased in this high-level tutorial are based on real-world use cases.
At the end of this talk a set of common use cases, as well as a couple of unique ones, will be shared to inspire what really is possible when organizations start looking at what they can get out of exploring their big data space.
There is a notion endemic to this industry that it is sensible in the long term to learn as you go. I’ll be exploring this fairly soft notion through techniques and insights from category theory, physics, mathematics, computer science, artificial intelligence, and cognitive science to motivate techniques that you can use to invest in yourself over time and to get out of this trap.
Join us to hear about our adventures in a microservice world at realestate.com.au. Learn about the problems that launched our journey, the solutions to our problems, and the solutions to our solutions. We’ll share lessons that we have learned, things that have gone well and less well, where we want to go next, and some of the approaches and tools that we’ve adopted to make the approach sustainable.
Have you ever wondered how our software industry has got itself into the pickle it is currently in? Most projects end up being massively late, costing way more than expected, and delivering big balls of mud that no one truly understands and thus are a nightmare to maintain. In desperation we try out the new approaches we hear about from the analysts and press. Approaches which often have wacky names and sort of make sense, yet, when we try them we still seem to be no better at successfully delivering software than we were a few decades ago.
This talk will be a full scale rant, attacking the technology industry’s sacred cows by exposing the motivations that hide behind them. We’ll show how these motivations lead us into practices that hinder rather than help us deliver quality software, practices that often make our lives just plain miserable.
However, all is not doom and gloom. Some organisations, notably the new breed of online technology-led companies, seem to be achieving things that the traditional corporate IT departments can only dream of. What are they doing differently? We’ll finish by exploring this question and what we can all learn from it.
Imagine having all the power of virtual machines with none of the downsides. That is pretty much what Docker is and it is no surprise it is taking the IT industry by storm. Chef, Puppet and many related tools have been instrumental in getting DevOps & Continuous Delivery off the ground by changing the way we think about infrastructure configuration. Docker is taking it to the next level by changing the way we think about what an application is.
In this talk Erwin will start with the basics of Docker and then show you how it not only takes a lot of complexity out of automating Continuous Delivery, but can also help you with the cultural aspects of DevOps.
Did you know there are thousands of visual tools available to help you solve problems, innovate, and improve everything that you do? But with thousands available from many authors, how do you figure out which tools to run? And when?
Pippi’s Book of the Dead Trading Cards explains a new concept called seestrings℠: visual, concatenated equations to see and solve problems.
What is a seestring?
How do you build a seestring?
How do you run one?
How I use seestrings in my Agile practice
Mathematica is a platform for technical computing used by applied mathematicians and engineers. While Mathematica is an immensely powerful and productive system, it hasn’t been on the radar of most practitioners of the computer arts. Sadly, the computer science crowd never gets to see the beautiful programming language that is the heart of Mathematica, which has now been named “the Wolfram Language”. Of course, the Mathematica programming language is a functional programming language, and a dynamically typed one at that. I will expose the audience to some of the features of the language that are like nothing else in the world of programming. I hope to give enough of a taste of Mathematica to encourage software developers to give Mathematica or the Wolfram Cloud a try.
Anti-Patterns are like patterns, only more informative. With anti-patterns you will first see what patterns reoccur in “bad” retrospectives and then you will see how to avoid, or remedy, the situation.
Based on my experience with facilitating retrospectives, join me for an entertaining and informative presentation on the anti-patterns I have seen and how to overcome the problems. I also encourage the audience to chip in with their experiences or questions along the way.
Five years ago, monitoring was just beginning to emerge from the dark ages.
Since then there’s been a Cambrian explosion of tools, a rough formalisation of how the tools should be strung together, the emergence of the monitoringsucks meme, the transformation of monitoringsucks into monitoringlove, and the rise of a sister community around Monitorama.
Alert fatigue has become a concept that’s entered the devops consciousness, and more advanced shops along the monitoring continuum are analysing their alerting data to help humans and machines work better together.
But Nagios is still the dominant check executor. Plenty of sites still use RRDtool. And plenty of people are still chained to their pagers, with no relief in sight.
What’s holding us back? What will the next 5 years look like? Will we still be using Nagios? Have we misjudged our audience? What are our biggest challenges?