Explore YOW! 2012
R is a domain-specific language for analyzing data. Why does data analysis need its own DSL? What does R do well and what does it do poorly? How can developers take advantage of R's strengths and mitigate its weaknesses? This talk will give some answers to these questions.
GitHub consists of a bunch of employees who have worked at other companies in the past and despised them. Okay, maybe they weren't all terrible jobs, but a lot of us remain skeptical of most software development practices. We do things differently at GitHub. We don't have meetings, we don't have managers, we don't do traditional code review, and we aren't always in the same room, much less on the same continent. And we couldn't be happier about it. We ship code quickly, without a lot of red tape, and still maintain an incredibly high level of code quality. It's a great way to keep your developers happy, and we think it can work in your company, too.
SOA, service-oriented architecture, burst on the scene in the new millennium as the latest technology to support application growth. In concert with the Web, SOA ushered in new paradigms for structuring enterprise applications. At the Forward Internet Group in London, we are implementing SOA in unusual ways. Rather than implementing a few large, business-related services per the original vision, we have developed systems made of myriads of very small, usually short-lived services. In this workshop, we will start by exploring the evolution of the speaker's SOA implementations. In particular, lessons learned from each implementation will be discussed, along with how those lessons were reapplied in the next one. Challenges (and even failures) will be explicitly identified.
In most disciplines built on skill and knowledge, from art to architecture, from creative writing to structural engineering, there is a strong emphasis on studying existing work. Exemplary pieces from past and present are examined and discussed in order to provoke thinking and learn techniques for the present and the future. Although programming is a discipline with a very large canon of existing work to draw from, the only code most programmers read is the code they maintain. They rarely look outside the code directly affecting their work. This talk examines some examples of code that are interesting because of historical significance, profound concepts, impressive technique, exemplary style or just sheer geekiness.
The short answer is that everyone needs them. In order to learn from your mistakes (and what else is there to learn from?), you need to reflect on what happened as objectively as possible, and make a plan of action for what should happen next. And with the emphasis agile processes place on retrospectives, you might assume that most teams actually get a useful retrospective on a regular basis. But that is often not the case, since facilitating a retrospective is like coding Smalltalk: a minute to learn, a lifetime to master. This presentation will, in much less than a lifetime, get you some of the way to creating useful retrospectives.
To get the most out of the hardware at our disposal today takes a deep understanding of the entire software and hardware stack. Nowhere is this more evident than in current CPU architectures. However, great gains are also ripe for the taking when dealing with networks and the TCP/IP stack. In this talk, we will discuss techniques, some well known and some less so, for getting the most out of the TCP/IP stack of any modern OS. We'll discuss:
(1) how application-level batching can be leveraged to avoid common TCP pitfalls,
(2) how UDP datagram size influences CPU and network efficiency,
(3) why the new OS system calls like sendmmsg/recvmmsg are so hot, and
(4) how easy it is to leverage asynchronous calls for fun and profit.
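One flavour of the application-level batching in point (1) can be sketched as follows. This is a minimal illustration, not code from the talk, and the length-prefix framing format is invented for the example: many small logical messages are packed into one buffer, so a single send replaces many small writes and their per-call syscall overhead.

```python
import struct

def pack_batch(messages):
    """Frame many small messages into one buffer: a 4-byte big-endian
    length prefix per message, so one send() carries the whole batch."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def unpack_batch(buf):
    """Recover the individual messages from a batched buffer."""
    messages, offset = [], 0
    while offset < len(buf):
        (length,) = struct.unpack_from(">I", buf, offset)
        offset += 4
        messages.append(buf[offset:offset + length])
        offset += length
    return messages

batch = pack_batch([b"hello", b"world"])
# One write of `batch` replaces two small writes, side-stepping
# small-packet behaviour such as Nagle/delayed-ACK interactions.
assert unpack_batch(batch) == [b"hello", b"world"]
```

The same framing works whether the batch travels over one TCP write or a single UDP datagram.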
Graphs are a fundamental data structure much loved in the scientific domains of graph theory and network science. In recent years, there has been a surge of interest from developers in applying such techniques to solve real-world business problems. Drawing upon experiences with Fortune 50 organisations and time spent producing graph database technology, this presentation will discuss the developer-focused graph tools and demonstrate the types of problems that can be solved with graphs and the technologies that are currently being used to solve such problems.
Big Data has dramatically increased the complexity of building data systems. Big Data forces you to leave the comfortable world of ACID, transactions, and relations, and thrusts you into a challenging world of distributed systems, CAP, and restrictive data models.
You cannot battle complexity with ever more complex systems. This leads to restrictive systems that are difficult to operate and have poor performance. The only way to reasonably address the complexity of Big Data systems is to fundamentally rethink your approach to avoid that complexity in the first place. A key insight is that the ability to store and process very large amounts of data opens up entirely new ways of building systems that were not possible pre-"Big Data".
NoSQL is not a panacea. Nor is Hadoop, Storm, or any of the other tools out there for Big Data. Yet there is a way to use these tools in conjunction with one another to build complete and robust realtime data systems with a minimum of complexity. These techniques are possible today and can be implemented and operated by small teams.
In this talk you'll learn:
- How a huge amount of complexity stems from the CRUD paradigm, and why you only need (and want) CR
- Why embracing immutability is the key to simplifying data systems
- Where NoSQL fits into the big picture
- The "Lambda Architecture": a generic approach to building data systems using a combination of batch processing and realtime processing
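The immutability and "CR-only" points above can be illustrated with a minimal sketch. This is invented for illustration, not the speaker's code: the master dataset is an append-only log of facts, and views are recomputed from it wholesale, the way the Lambda Architecture's batch layer does.

```python
# Core idea: an immutable, append-only master dataset plus views
# recomputed from scratch over the whole log (the batch layer).
master_log = []  # immutable facts: we only ever create and read

def record(event):
    master_log.append(event)  # never update or delete a fact

def batch_view(log):
    """Recompute page-view counts from the entire master dataset.
    A bug here is fixed by redeploying and recomputing, not by
    repairing mutated state."""
    counts = {}
    for e in log:
        counts[e["page"]] = counts.get(e["page"], 0) + 1
    return counts

record({"page": "/home"})
record({"page": "/home"})
record({"page": "/about"})
assert batch_view(master_log) == {"/home": 2, "/about": 1}
```

In the full architecture a speed layer would increment the view for events that arrived since the last batch run; the log itself stays untouched either way.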
Which APIs stand the test of time, and which ones are breaking changes waiting to happen? It's an important question, because although APIs only account for a tiny portion of your code base, poorly-designed APIs can cost you massive amounts of money, delay your projects, and even hurt your application's user experience.
The Agile movement shifted the relationship between clients and developers in a profound way. In waterfall processes, clients specified large amounts of functionality, then nervously faded into the background until the fateful day-of-delivery. With Agile, developers strove to engage with clients continuously, and delivered much more frequently against their needs. A new trust was established. At the Forward Internet Group in London, we are implementing a second major shift between clients and developers. The trust of the clients in developers evolves into a broader trust of the developers to deliver business value without resorting to a series of well-defined stories. In essence, the business has empowered the developers to do what they think is right for the business. This model, popularized by Facebook, has several labels, but the one we prefer for our flavor is Programmer Anarchy.
In the challenge to reach the lowest possible latencies, as we push the boundaries of transaction processing, the good old-fashioned lock imposes too much contention on our algorithms. This contention results in unpredictable latencies when we context switch into the kernel, and in addition limits throughput as Little's law kicks in. Lock-free and wait-free algorithms can come to the rescue by side-stepping the issues of locks, and when done well can even avoid contention altogether. However, lock-free techniques are not for the faint-hearted. Programming with locks is hard. Programming using lock-free techniques is often considered the realm occupied only by technical wizards. This session aims to take some of the fear out of non-locking techniques. Make no mistake, this is not a subject for beginners, but if you are brave, and enjoy understanding how memory and processors really work, then this session could open your eyes to what is possible if you are willing to dedicate time and effort to this amazing subject area. Attendees will learn the basics of how modern Intel x86_64 processors work and the memory model they implement that forms the foundation for lock-free programming. Empirical evidence will be presented to illustrate the raw throughput and potential latencies that can be achieved when using these techniques.
As developers we all like to use tightly coupled systems where possible and loosely coupled ones where necessary. In the closed world of the pre-Cloud era, traditional relational databases gained tremendous leverage from tight coupling of B-tree storage, transaction managers, and query optimizers, providing developers with an efficient, consistent, and easy-to-use ACID programming model. In the open, distributed, asynchronous, and heterogeneous world of the Cloud, we must consider more loosely coupled computational models that are designed with distribution and concurrency built in from the get-go, and accept that our knowledge of the world is never fully consistent. Actors as envisioned by Carl Hewitt fit the bill perfectly. In this talk we will show how highly available stateful Actors provide a flexible and easy-to-use programming model for the Cloud on the outside, while still allowing for traditional programming models on the inside.
Every day we hear more and more about businesses in the "big data" space, but how many of us really understand how to build an engineering team driven by data? In this session I'll explore my experience building a data-driven engineering team from the ground up. I'll walk through how a small team at Intent Media transitioned to using hundreds of terabytes of data a day to do real-time decisioning, dive deep on some of the tough technical issues that we faced, and discuss some of the problems that we're still dealing with, as well as some plans for the future. Most importantly, I'll show how data now drives almost everything we do, and how we're a more focused and productive team because of it.
This talk will provide a unique insight into the core design processes applied by the Creator design team at LEGO. John-Henry will cover all steps along the way, from concept generation to product verification, testing and sign-off. He will show you how real design thinking can influence not only product features, but also how your ideas could be perceived on a global scale. There is more to being a LEGO designer than meets the eye, and this presentation could change the way you think about challenges you might face in your daily work as well. It will appeal to anyone who has an interest in creative design, logical building, or just wants to be an 8-year-old for half an hour!
You've seen them. You've probably made one: The architecture vision diagrams. The block diagrams, the message bus topologies... How many of these diagrams ever actually get built?
We never really finish constructing one of these grand visions before something interferes. Maybe it's a merger or acquisition. Maybe your company has a "regime change". (After all, the average tenure of a CIO is down to 18 months!)
The "end state" vision never gets built. Instead, we need to focus on how to flex and change, incorporating new technology, new principles, new business models, and even the last generation's legacy. Call it agile architecture, or meta-architecture, or "how I learned to love laminated stucco." It's architecture without an end state.
With Clojure on the rise it is time you learned how to properly wield your parens. Join Aaron as he walks you through building software with Clojure. You will walk through a real world example; building a library for working with Redis. Building beautiful abstractions on top of data has always been the lisp way, and this example will be no different. You will walk away with a better understanding of Clojure, interfacing with external services, and gain situational knowledge of when and how to use Clojure's macros to help you produce more powerful abstractions.
We live in an era of extraordinary increases in available bandwidth, disk space, and processing power, and an ever faster growing World Wide Web. Now, the Web is about to take a gigantic leap forward when it comes to communication and connectivity, a leap that will happen very fast and will most likely take the established world of legacy Web solutions by surprise. We have already felt the beginnings of change; the Web is moving into its next phase, morphing from a static and stale network into a live, interactive, and constantly changing mesh of communication and connectivity, a living Web. This new living Web will allow us to interact with our friends and colleagues at levels we couldn't have imagined 5 years ago, solve business problems that seemed impossible, continue to innovate using the Web as a foundation for new solutions benefiting humanity, and access systems and share information at levels never seen before. The Web as we know it today was only the beginning; now the Web will change everything.
At the forefront of this (r)evolution is HTML5 and some of the new communication features associated with this new Web standard. Features that make it possible for Web applications to be on par with, or even exceed, traditional desktop applications, while at the same time reducing the burden on our Web middle tier. This session will explore what is possible and how we might rethink our approach to Web architecture.
There is a question you have answered several times a day, at least since your teens.
It's the question you hear from everyone you meet, from all of nature.
It's the question you ask yourself when you look in the mirror, or look down from a tall building.
It's the question you'll ask yourself in the moment before you die.
Your answer to this single question, so common you've probably never even noticed it, shapes everything about you, and determines your success.
It is the most important minute of your life, and it is the doorway to your future.
Over the last 18 months REA and Hooroo have used cloud computing and continuous delivery to deliver new products to the market faster than any previous launch for their organisations. Come and learn from the real world experience of two Aussie success stories. Hear from a green field business and an established legacy high traffic organisation - two contrasting starting points moving towards a common goal. Learn about how they adopted cloud computing, continuous delivery, build pipelines and more.
- adopting continuous delivery in a green field business
- retrofitting continuous delivery into a complex legacy environment
- experiences adopting IaaS with AWS and PaaS with Engine Yard
- the challenges faced
- the tools and techniques used and developed
We are taught to program computers using branching, loops, and accumulators. When we write our loops we also mix in all sorts of other concerns. This makes parallelization very difficult. I will present some ideas from the famous array programming languages APL and J and make the case that learning Array Thinking should be in your future.
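The contrast the abstract draws can be sketched in a few lines. This is an illustrative example in plain Python (not from the talk; a real workload would use an array language or NumPy): a loop interleaves iteration, filtering, and accumulation, while the array style expresses each concern as a separate whole-collection operation.

```python
data = [3, -1, 4, -1, 5, 9]

# Loop style: iteration, filtering, and accumulation are tangled
# together, so the computation is hard to split across cores.
total = 0
for x in data:
    if x > 0:
        total += x * x

# Array style: each concern is its own whole-collection step.
# The steps have no loop-carried state, so each is trivially
# parallelizable and the pipeline reads as a description of intent.
positives = [x for x in data if x > 0]
squares = [x * x for x in positives]
assert sum(squares) == total == 131
```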
Very few of the contributors that wrote the original implementation of the compiler that defined the CoffeeScript programming language had any prior experience with compilers. After over two years of hacking on the original compiler, it became difficult to work with, and actually started to hinder the development of the language itself. Michael Ficarra, a long-time collaborator on the project, realised that people had an incentive to fund the development of a newer, better compiler. In exchange for enough cash to keep his student loan creditors at bay, he applied the techniques he learned throughout his research career developing compilers to a new CoffeeScript compiler. In this talk, Michael will demonstrate the benefits of the formal compilation and declarative specification strategies he used, and describe how he implemented them in his rewrite of the CoffeeScript compiler.
“Mediocrity guaranteed.” This sad tagline describes most of the processes we use today, including typical agile processes. It’s easy to see why. Software development is an expensive, risky business. To deal with the risk, the players involved adopt a client-vendor model where those in the client role give requirements and those in the vendor role estimate time and effort and agree to build what’s asked for. In this model we clearly separate responsibilities so that we know who’s accountable when things go wrong. Although we know things rarely go as planned, and innovative ideas rarely spring from such a relationship, we continue to work in processes where treating our coworkers as outsourced vendors is considered “best practice” and risking everything on the ideas of a select few isn’t regarded as risky.
This talk is about an alternative way of working.
In this talk Jeff explores companies beginning to adopt a style of working where everyone in the organization gets involved with identifying and solving problems. You’ll hear examples from real companies describing their practices for learning first-hand about customers and users, practices for collaboratively designing solutions for the problems found in the real world, and approaches to learning if what we created really benefited anyone. This new style of work is a process cocktail combining the best of agile development, lean software development and lean startup, user-centered design, and collaborative design thinking.
This style of work isn’t the traditional client-vendor model where knowing who’s to blame is the primary concern. It’s a co-making style of work where everyone brings their skills and experience to the table and together takes ownership for making great things.
This talk looks at a range of use cases for Big Data and patterns for integration from real world deployments. We present a reference architecture for integrated Big Data processing based on Think Big's experience deploying dozens of Big Data solutions. We look at key patterns and questions based on real world use cases for processing petabyte-scale machine and online user data for batch and near real-time analytics. We address topics like:
* when to choose different technologies, whether big data or otherwise
* streaming and batch import of data
* database integration
* query tools and languages
* programming languages and frameworks
* cloud and dedicated hosting
* machine learning and data science tools
* getting organizational buy in
Big companies can create really big software disasters. Some of the worst commercial failures have come from undersized production systems. Millions of dollars get thrown into hardware at the last minute. Just moving to the cloud doesn't help. It lets you get the (virtual) hardware in place faster, but you still might spend a lot more than you expect for capacity. This session will present a technique for analyzing your system's capacity--either during development or as you contemplate changes. Using these techniques could save you a lot of embarrassing downtime and your company a lot of money.
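One back-of-the-envelope capacity check in the spirit of this abstract uses Little's law, L = λ × W. The formula choice and all the numbers below are illustrative, not the speaker's actual technique:

```python
import math

def servers_needed(arrival_rate_per_s, avg_latency_s, threads_per_server):
    """Estimate server count from Little's law: the average number of
    requests in flight is arrival rate times average residence time."""
    in_flight = arrival_rate_per_s * avg_latency_s  # L = lambda * W
    # Round up: a fractional server still means a whole extra machine.
    return math.ceil(in_flight / threads_per_server)

# 2000 req/s at 250 ms each => 500 requests in flight on average;
# with 50 worker threads per server, that is at least 10 servers.
assert servers_needed(2000, 0.25, 50) == 10
```

Running this kind of arithmetic before launch, with measured latencies rather than guesses, is far cheaper than discovering the shortfall in production.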
Would you believe that the image of the famous Utah Teapot is rendered using just div elements and CSS without any OpenGL, WebGL, Canvas, or other "real" graphics capabilities?
While the original Live Labs project has long gone to the happy hunting grounds of technology, we salvaged this little pearl as a timeless demonstration of doing a lot with very little.
The title software architect comes with many connotations, and often these are not good. Developers think of hand-wavers who inhabit ivory towers and have forgotten how to write code. Project managers think of technologists who are chasing perfection in initiatives that serve obscure technical purposes. Yet architecture is crucial to the success of any software project. In this talk Erik will present his experience on how to address this issue, introducing techniques that help teams come up with good designs and sustainable architectures without the need for a superstar architect. Topics include evolutionary architecture, the seductive power of abstractions, vertical slicing, software visualisations, and the need to experience the consequences of decisions.
This presentation discusses high availability at Heroku, the cloud application platform. We motivate the discussion with specific examples from our experience designing, developing, and operating highly available cloud services at scale. We emphasize that high availability depends critically on the interaction between development and operations, and highlight several patterns that we've found useful in our systems work at Heroku.
Of all the ideas of Lean, batch size reduction is the most important economically. Yet, few product developers understand it. Manufacturers handle physical objects and they can easily see their batch sizes. Developers handle information and batch size is frequently invisible. In fact, 97 percent of product developers have no systematic program to reduce batch size. In this keynote Don Reinertsen will discuss why batch size reduction is such an important tool, and how you can approach it with a bit of science. He will show you why popular manufacturing ideas like one-piece flow can make little sense in product development and how to think about the economics of batch size.
Agile software development was born over a decade ago, with a gathering of industry luminaries in Snowbird, Utah. They were frustrated that so much ceremony and effort was going into so little success, in failed project after failed project, across the software industry. They had each enjoyed amazing successes in their own right, and realised their approaches were more similar than different, so they met to agree on a common set of principles.
Which we promptly abandoned.
The problem is that Agile calls for us to embrace uncertainty, and we are desperately uncomfortable with uncertainty. So much so that we will replace it with anything, even things we know don’t work. We really do prefer the Devil we know.
For the last couple of years Dan has been studying and talking about patterns of effective software delivery. In this talk he explains why Embracing Uncertainty is the most fundamental effectiveness pattern of all, and offers advice to help make uncertainty less scary. He is pretty sure he won’t succeed.
Connascence (noun) is defined as (1) the common birth of two or more at the same time; production of two or more together, (2) That which is born or produced with another, or (3) the act of growing together.
In software, we are told we should reduce the coupling between our modules so that our software is easier to maintain. But what is coupling? Myers (in "Composite/Structured Design") suggests that there are seven levels of coupling, but his nomenclature was developed in the days of structured programming and does not deal well with objects and classes.
By identifying and classifying how changes in one portion of a software program can affect other places in the program, connascence attempts to define coupling in terms of the ways software can change. In this talk we will examine the different types of connascence and come to understand how coupling affects software development.
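Two of the weaker forms of connascence are easy to show in code. The example below is illustrative and not from the talk; the `make_order` functions are invented names:

```python
# Connascence of Position: every caller must agree with the
# definition on the ORDER of the arguments.
def make_order_positional(customer, product, quantity):
    return {"customer": customer, "product": product, "quantity": quantity}

# Swapping two arguments runs fine but silently breaks the meaning:
bad = make_order_positional("widget", "alice", 2)  # customer/product swapped

# Connascence of Name: with keyword-only parameters, callers need
# only agree on the NAMES, a weaker (and therefore preferable)
# coupling that survives any reordering of the call site.
def make_order_named(*, customer, product, quantity):
    return {"customer": customer, "product": product, "quantity": quantity}

good = make_order_named(product="widget", customer="alice", quantity=2)
assert good == {"customer": "alice", "product": "widget", "quantity": 2}
```

The refactoring from positional to named arguments is one concrete instance of the talk's theme: replacing a stronger form of connascence with a weaker one.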
Driven by a desire for faster short-request processing and near real-time responses to critical business events, and made increasingly practical by commodity pricing of machines with terabytes of RAM and the software to use them, we are witnessing an industry-wide shift to in-memory data management. In this session I will take you through the evolving landscape of In-Memory Data Grids, In-Memory Databases and CEP Engines. Key concepts and metrics will be introduced to equip you with a toolkit for applying in-memory data management. I will also cover emerging standards and make some predictions about the future.
Architecting a complex program out of simpler, independent building blocks has long been recognized as a means to higher programmer productivity and programs that work better. But creating independent, robust, high performance, reusable software components turns out to be remarkably difficult. Most schemes fall well short. I will show how a combination of features of the D programming language enables the creation of best-of-breed components that 'snap together' with ease, with plenty of headroom for user customization and compiler optimization.
Two tasks of increasing importance in distributed computing are:
(1) robustly tracking units of measure, such as weights, distances, energy, monetary currencies
(2) reasoning over data, such as answering symbolic queries
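Task (1) can be sketched with a minimal unit-carrying value type. This is invented for illustration; the `Quantity` class is not from the talk: a value carries its unit, and mismatched units fail fast instead of silently producing a wrong number.

```python
class Quantity:
    """A value tagged with its unit of measure."""

    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Robust tracking: refuse to combine incompatible units.
        if self.unit != other.unit:
            raise ValueError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __eq__(self, other):
        return (self.value, self.unit) == (other.value, other.unit)

total = Quantity(3, "kg") + Quantity(4, "kg")
assert total == Quantity(7, "kg")

try:
    Quantity(3, "kg") + Quantity(4, "m")
except ValueError:
    pass  # mixing kilograms with metres is rejected, not summed
```

Statically typed languages can push the same check to compile time, which is precisely what makes units of measure interesting for distributed systems where data crosses many codebases.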
The problem of decoupling dependencies in software has been approached in various ways. Dependency Injection and Inversion of Control are ubiquitous patterns that attempt to address that problem. In this talk, we will take a fresh look at these patterns from the perspective of Functional Programming in Scala. Once we uncover their essence, we find that there is an exceedingly simple purely functional alternative. We identify a deep connection to monads, and discover that the process of creating systems of decoupled software components is ultimately the process of creating programming languages.
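The "exceedingly simple purely functional alternative" the abstract hints at can be sketched in a reader style, transcribed here to Python for illustration (the talk itself uses Scala, and this is my paraphrase, not the speaker's code): a component is just a function from its dependencies to a result, and wiring components together is ordinary function composition.

```python
def fetch_user(user_id):
    # A "reader": instead of reaching for a global or an injected
    # field, return a function that awaits its dependency (the db).
    return lambda db: db[user_id]

def greet(user_id):
    # Composing readers threads the same environment through both;
    # no container, no annotations, no framework.
    return lambda db: f"hello, {fetch_user(user_id)(db)}"

# The dependency is supplied once, at the edge of the program.
fake_db = {42: "alice"}
assert greet(42)(fake_db) == "hello, alice"
```

Swapping `fake_db` for a real connection, or a stub in tests, requires no changes to the components themselves, which is the decoupling that DI containers promise.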
In four years, New Relic has grown from a dozen customers to over 25,000 customers, and from collecting hundreds of metrics a day to collecting billions a day. We've done this without a huge investment in hardware - which is very cool. This talk will cover some of the ways we built a system that has scaled so effectively. Sharding, of course, but sharding is not magic; caching, of course, but caching is not magic; hardware improvements, of course, but ... well, you get the idea. There is no one magic solution, but a combination of a number of strategic decisions.
Most programmers' experience with distributed consensus is either purely theoretical ("well, I read a bunch of Paxos papers") or based entirely on specific systems ("we solve all of our consistency problems with Zookeeper"), and this has an unfortunate consequence: many production systems have behavior that is not fully understood by anyone at all. Justin will break down the barrier between theory and practice by discussing exactly what promises various real, deployed systems can make when it comes to consistency in the face of failure.