Explore YOW! Lambda Jam 2015
Pattern matching with dependent types in Coq can be awkward, while the same program in Agda might be straightforward and elegant. Yet despite the awkwardness, there may still be reasons to choose Coq for your next dependently-typed development, for example if you want a tactic language to develop domain-specific proof-search procedures.
The purpose of this talk is to demonstrate a small collection of tricks you can use to overcome the awkwardness of pattern matching with dependent types in Coq.
We’ll first review what it means to pattern match on inductive families, contrasting Coq with Agda, and examining what it is about Coq that complicates pattern matching. Using a simple running example, we’ll show how to use Coq “match” annotations to eliminate nonsense cases, and the convoy pattern for refining the types of things already in scope. Finally, we’ll show that by equipping an inductive family with some well-chosen combinators, it is often possible to regain some semblance of elegance.
The techniques described in this talk have been distilled from: Adam Chlipala, Certified Programming with Dependent Types, MIT Press, 2013. http://adam.chlipala.net/cpdt/
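For readers more familiar with Haskell, the flavour of the problem can be sketched with GADTs, a rough analogue of inductive families (the Coq-specific tricks themselves are covered in the CPDT material above):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A length-indexed vector: a Haskell GADT analogue of an inductive family.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type index rules out the nonsense VNil case, so no equation for it
-- is needed; in Coq, dismissing such cases is what requires "match"
-- annotations or the convoy pattern.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons 'a' VNil))
```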
All F# developers will be familiar with:
* seq expressions that allow imperative looking code to compute lazy sequences of data on demand.
* async blocks that enable asynchronous operations to be composed together.
Rather than build constructs like seq and async into the language F# has built them on top of a more general feature: computation expressions. This decision enables developers to build their own expression languages for composable abstractions using familiar syntax. This talk will explore the power of computation expressions to deal with a range of real-world problem domains. Examples will include:
* Composing computations that fail
* Asynchronous paged data sources
* Building typesafe domain specific languages that can be interpreted at runtime

By writing programs this way we can separate the pure problem domain from the underlying details of the computations.
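As a rough illustration of the first bullet, here is "composing computations that fail" in Haskell do-notation, the analogue of what an F# maybe-style computation expression builder provides (function names are illustrative, not from the talk):

```haskell
-- Composing computations that fail: the Maybe monad short-circuits the
-- rest of the computation as soon as any step returns Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b   -- short-circuits on division by zero
  y <- safeDiv x c
  pure (x + y)

main :: IO ()
main = do
  print (calc 100 5 2)  -- Just 30
  print (calc 100 0 2)  -- Nothing
```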
Today, functional programming is mostly used in the development of server-side software and command line applications. This is changing with Swift, a language with strong support for functional programming that is pitched as the successor of Objective-C for desktop and mobile applications. Unsurprisingly, there are obstacles. Any attempt to use Swift’s support for functional programming to its full potential leads to architectural challenges, especially in the separation of stateful and purely functional computations. In desktop and especially in mobile applications, a superb user experience is the central aim in application design. This typically leads to a ubiquitous use of application frameworks, such as Cocoa, which tend to encourage a tangled web of stateful, mutually mutating objects — very much the antithesis of modern functional programming.
Established approaches to disentangling stateful from pure computations, while necessary, are not sufficient for desktop and mobile applications. A careful separation of the view layer (implementing the user interface) from the application logic (forming the computational heart of the application) requires us to rethink application architecture and depends on adopting an interaction paradigm focusing on the data flow between components rather than on object mutation.
This talk summarises my experience with building a desktop application including a complex user interface in Swift and Haskell. The talk will provide answers to the following questions. In which way is functional programming useful in desktop and mobile applications? What are the advantages of stateless, pure functions? What software architecture maximises the impact of functional programming in that context? How can we propagate changes from user interactions, file system & network events, and so on in a manner that keeps an application responsive, while facilitating stateless computations?
Although the talk mostly revolves around Swift, no previous knowledge of Swift is required and the main ideas translate to writing desktop and mobile applications in other functional languages, such as Scala, F#, or Haskell.
At first it can be tricky to work out exactly how to test your code with properties. The good news is that there are some very obvious and useful patterns that make writing generators and properties easy for mere mortals.
This talk aims to cover some of those patterns, helping people already working (or struggling) with property-based testing, and motivating its use for those still unfamiliar with the concept.
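Two of the most common such patterns are round-trip properties and testing against a reference model; a minimal QuickCheck sketch (not taken from the talk):

```haskell
import Test.QuickCheck

-- Round-trip pattern: applying an operation and its inverse gives back
-- the original input.
prop_reverseRoundTrip :: [Int] -> Bool
prop_reverseRoundTrip xs = reverse (reverse xs) == xs

-- Reference-model pattern: compare an operation against a trivially
-- correct (if slow) model implementation.
prop_sumModel :: [Int] -> Bool
prop_sumModel xs = sum xs == foldr (+) 0 xs

main :: IO ()
main = do
  quickCheck prop_reverseRoundTrip
  quickCheck prop_sumModel
```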
In this talk we will present a technique based on relative monads that allows one to embed generic syntax involving a monad, m, which can then be simply reused for any relative monad, relative to m.
We will walk through a production use of this at CBA (converted to Haskell), which has allowed us to cheaply add error handling to new monads, along with the theory behind it. Finally, we will compare this approach with monad transformers and monad layers.
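A minimal sketch of the idea (class and function names are hypothetical, not the CBA library): a monad m' relative to m has return and bind that pass through m rather than the identity functor, so generic syntax written once against the interface is reused for every monad relative to m:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleContexts #-}

-- Hypothetical names; a sketch of the relative-monad interface.
class RelMonad m m' where
  retRel  :: m a -> m' a
  bindRel :: m' a -> (m a -> m' b) -> m' b

-- Generic syntax written once against RelMonad...
incr :: RelMonad Maybe m' => m' Int -> m' Int
incr x = x `bindRel` (retRel . fmap (+ 1))

-- ...is reused for any monad relative to Maybe, e.g. IO with failure.
newtype MaybeIO a = MaybeIO { runMaybeIO :: IO (Maybe a) }

instance RelMonad Maybe MaybeIO where
  retRel = MaybeIO . pure
  bindRel (MaybeIO io) f = MaybeIO (io >>= runMaybeIO . f)

main :: IO ()
main = runMaybeIO (incr (retRel (Just 41))) >>= print  -- Just 42
```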
Functional Reactive Programming is a paradigm that expresses changing data as streams of values over time.
Combined with the functional programming concepts introduced in Swift, it greatly reduces code complexity and leads to more expressive and reusable code.
This talk gives an introduction to functional reactive programming using Swift with ReactiveCocoa 3 and shows how it can be used to write software more declaratively.
Even correct software fails.
So what happens if we shift the focus of functional programming to reliable systems? Let’s attack the hard and ugly set of programming problems, the ones whose solutions don’t naturally fall out of building a neat library. Let’s apply our functional programming toolkit to delivering systems end-to-end.
This is a talk on reliable systems: what it takes to build them, how functional programming can be and is being leveraged, and, perhaps of more interest, where current approaches are letting us down.
Functional Programming, like Object Oriented and Structured Programming, is a radical departure from previous programming practice. As with those two earlier changes in approach, adoption is gradual and faces significant resistance.
While the ideas around functional programming are becoming more widely accepted, actually adopting the techniques in old-school imperative teams and code bases presents significant challenges technically, culturally and politically. Exploiting the full potential of FP does require experience, and when the imperative experience vastly outweighs that of FP, importing experience can be enormously beneficial.
This talk is about experiences introducing functional programming across a large organisation; what works and, perhaps more importantly, what doesn’t. It looks at some specific patterns and anti-patterns that should be considered by people wanting to transform any organisation that has significant existing program assets and software engineering teams. It will help you form and present arguments for achieving meaningful change.
A record system is a type system feature found in many languages, which is used for modeling data using named fields. This talk discusses the capabilities and pitfalls of Haskell’s record system and how the problems can be addressed.
At a language level, we’ll look at how some Haskell-inspired languages improve upon Haskell’s record system. At a library level, we’ll look at how modern GHC features enable alternative record systems to be built.
We’ll also look at the structural typing concepts of extensible records and row polymorphism.
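The core pitfall is that record fields share one flat namespace, because each field generates a top-level selector function. A common library-level workaround (illustrative, not necessarily the talk's example) is prefixed fields plus a type class to recover overloaded access:

```haskell
-- Without a workaround, two records cannot both declare a "name" field:
-- the generated selector functions would clash. The usual pattern is to
-- prefix the fields, then recover overloading with a class (lens
-- libraries automate this).
data Person  = Person  { personName  :: String }
data Company = Company { companyName :: String }

class HasName a where
  name :: a -> String

instance HasName Person  where name = personName
instance HasName Company where name = companyName

main :: IO ()
main = putStrLn (name (Person "Ada") ++ " works at " ++ name (Company "Acme"))
```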
This talk is a case study in library design in Haskell.
Fritz Henglein has shown through a number of excellent papers how to use “discrimination” to do lots of things in O(n): Sorting many more data types than you’d expect, table joins, etc.
In the process of optimizing this approach and wrapping it up in a form that can be easily consumed, we’ll take a lot of detours through the different ways you can think about code when optimizing Haskell.
We’ll need some category theory, from a deeper understanding of monoids to Day convolution.
We’ll need to consider final and initial encodings.
We’ll need to drift down to low level system concerns from building a custom foreign prim to nesting unsafePerformIO within unsafePerformIO.
We’ll need properties of laziness from productivity to IVars.
Along the way we’ll find and fix a small problem with the initial discrimination paper, which opens the door to streaming results, rather than having to wait until all the input is ready.
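As a taste of the idea (a toy version, not the talk's optimised library), a discriminator on small integer keys partitions values in O(n + bound) with a single accumArray pass, and O(n) sorting falls out by discriminating on the element itself:

```haskell
import Data.Array (accumArray, elems)

-- A toy discriminator in Henglein's sense: partition values by a
-- small-integer key in O(n + bound), one bucket per key.
discNat :: Int -> [(Int, v)] -> [[v]]
discNat bound kvs =
  map reverse                     -- restore stable (input) order per bucket
  . filter (not . null)
  . elems
  $ accumArray (flip (:)) [] (0, bound) kvs

-- O(n) sorting falls out: discriminate on the element itself.
sortNats :: Int -> [Int] -> [Int]
sortNats bound xs = concat (discNat bound [ (x, x) | x <- xs ])

main :: IO ()
main = print (sortNats 9 [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]
```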
Monads, monad transformers and free monads are commonly used and understood, but the comonadic equivalents have received a lot less attention. The goal of this talk is to discuss comonads, comonad transformers, and cofree comonads: the different perspective they provide, and how they can be used as a better abstraction for some problems.
The first half of the talk will be building up the intuition for what comonads and comonad transformers are for and how you can use them.
This will be done by comparing and contrasting various comonadic entities with their monadic equivalents, poking around with some concrete instances, and demonstrating some examples of comonads and comonad transformers in use.
The second half of the talk will be about cofree comonads, how they relate to free monads, and how we can use them together to do exciting things.
We’ll start with a quick recap of free monads, which we’ll use to develop a little language which will be the basis for our running example. From this we’ll develop a corresponding cofree comonad while explaining the intuition behind them, after which we’ll show how they can be made to work together.
We’ll end with a demonstration of how this is useful, and a whirlwind tour of some of the opportunities this opens up.
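To give a flavour of the free/cofree pairing (a simplified sketch; the talk's running example will differ), a free-monad "program" can be run against a cofree-comonad "interpreter":

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Free f a   = Pure a | Free (f (Free f a)) deriving Functor
data Cofree f a = a :< f (Cofree f a)

-- A pairing combines one layer of f with one layer of g.
class (Functor f, Functor g) => Pairing f g where
  pair :: (a -> b -> c) -> f a -> g b -> c

-- Run a free-monad "program" against a cofree-comonad "interpreter".
run :: Pairing f g => (a -> b -> c) -> Free f a -> Cofree g b -> c
run k (Pure a)  (b :< _)  = k a b
run k (Free fs) (_ :< gs) = pair (run k) fs gs

-- Toy language: bump a counter, or read it.
data CounterF k   = Bump k | Get (Int -> k) deriving Functor
data CoCounterF k = CoCounterF { onBump :: k, onGet :: (Int, k) }
  deriving Functor

instance Pairing CounterF CoCounterF where
  pair k (Bump a) co = k a (onBump co)
  pair k (Get f)  co = let (n, b) = onGet co in k (f n) b

-- The interpreter: an infinite cofree tree of counter states.
interp :: Int -> Cofree CoCounterF Int
interp n = n :< CoCounterF (interp (n + 1)) (n, interp n)

-- The program: bump; bump; get.
prog :: Free CounterF Int
prog = Free (Bump (Free (Bump (Free (Get Pure)))))

main :: IO ()
main = print (run const prog (interp 0))  -- 2
```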
Repa Flow is a library for data parallel data flow programming in Haskell. A flow is a bundle of independent streams, and the library provides operators such as map, fold and filter that apply to all streams in a bundle. Data parallelism is introduced by evaluating each stream in a separate thread on a multi-core machine. Like a souped-up version of the Haskell conduit or pipes library, Repa Flow adds support for efficient chunked streams of unboxed data; bucketed files; and analytic operators such as the SQL-like groupBy and the Hadoop-like shuffle. Repa Flow uses three separate array fusion methods to gain good numeric performance, all while maintaining a pleasant user-facing API.
I’ll use Repa to introduce the Data Parallel Data Flow paradigm, and discuss the general concept of flow polarity, meaning that data is pulled from stream sources but pushed to stream sinks. If a given flow operator (like map or zip) has multiple input or output ports then polarities can be assigned to these ports in various ways. However, only some assignments permit the operator to run in constant space. This fact is intrinsic to the data-flow model, rather than being specific to any particular implementation. In Repa Flow, only operators that run in constant space are exported by the library, which ensures that all programs written with the library also run in constant space by construction.
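The polarity idea can be sketched with two tiny types (not Repa Flow's actual API): a source is pulled from, a sink is pushed to, and draining one into the other runs in constant space:

```haskell
-- A Source is pulled from: ask it for the next element, if any.
newtype Source a = Source { pull :: IO (Maybe (a, Source a)) }

-- A Sink is pushed to: hand it the next element, get back the next sink.
newtype Sink a = Sink { push :: a -> IO (Sink a) }

-- Mapping over a source keeps it pull-based; no buffering is needed.
mapSource :: (a -> b) -> Source a -> Source b
mapSource f (Source p) = Source $ do
  r <- p
  pure (fmap (\(a, s) -> (f a, mapSource f s)) r)

-- Drain a source into a sink, one element at a time: constant space.
drain :: Source a -> Sink a -> IO ()
drain src snk = do
  r <- pull src
  case r of
    Nothing        -> pure ()
    Just (a, src') -> push snk a >>= drain src'

fromList :: [a] -> Source a
fromList []       = Source (pure Nothing)
fromList (x : xs) = Source (pure (Just (x, fromList xs)))

printSink :: Show a => Sink a
printSink = Sink (\a -> print a >> pure printSink)

main :: IO ()
main = drain (mapSource (* 2) (fromList [1 .. 3 :: Int])) printSink
```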
Keming Labs was commissioned by a client in the energy sector to design and build a custom heavy-equipment inspection tool. The tool was designed with Harel Statecharts and implemented with ClojureScript running on an Android device.
In this talk, Kevin will discuss this approach; in particular, how explicit immutable data yielded fast, on-device prototyping, easy-to-implement visual testing tools, and improved cohesion between developers, designers, and the end customers.
This talk is about a paper “Invertible syntax descriptions”, and some real world use of the ideas therein.
Traditionally, developers have written printers and parsers separately, even if the source and destination are intended to be isomorphic. We will go through the ideas of a paper, and then build up a library for writing JSON printers and parsers at the same time.
By unifying printers and parsers like this, we can reduce duplication, increase maintainability and get some correctness for free.
To understand what’s going on you will need to know:
* Haskell, including a strong grasp of Functor, Applicative, Monad and Alternative type classes.
* What JSON looks like
* Roughly, what “isomorphism” means.
To fully appreciate some of the ideas, you may want to know:
* Basic category theory (functors and isomorphisms)
* What the Control.Lens notion of a Prism is.
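As a much-simplified taste of the idea (the paper uses partial isomorphisms and type classes; this sketch just pairs the two directions in one record), a single description yields both a parser and a printer:

```haskell
-- A toy invertible syntax description: one value carries both directions.
data Syntax a = Syntax
  { parse  :: String -> Maybe (a, String)
  , print' :: a -> Maybe String
  }

-- A primitive: match one specific character.
char :: Char -> Syntax Char
char c = Syntax
  { parse  = \s -> case s of
      (x : rest) | x == c -> Just (c, rest)
      _                   -> Nothing
  , print' = \x -> if x == c then Just [c] else Nothing
  }

-- Sequencing: both directions compose from the same description.
pairS :: Syntax a -> Syntax b -> Syntax (a, b)
pairS sa sb = Syntax
  { parse = \s -> do
      (a, s')  <- parse sa s
      (b, s'') <- parse sb s'
      Just ((a, b), s'')
  , print' = \(a, b) -> (++) <$> print' sa a <*> print' sb b
  }

ab :: Syntax (Char, Char)
ab = pairS (char 'a') (char 'b')

main :: IO ()
main = do
  print (parse ab "abc")        -- Just (('a','b'),"c")
  print (print' ab ('a', 'b'))  -- Just "ab"
```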
When people hear about functors in OCaml they probably immediately think of map and the Functor type class in Haskell. Then when they start looking into what OCaml calls functors they’re likely to be confused, finding something quite unexpected.
This talk will aim to introduce people to the OCaml module system, showing how OCaml modules are different, explaining what an OCaml functor is, and showing how OCaml modules allow us to solve problems in a unique way.
Distributed consensus protocols are notorious for doing people’s heads in. Machines fail and networks lag, which are unfortunate consequences of entropy. Without loss of generality we can assume that the machines despise us, that our only hope is to enchant them with correctness proofs, and that we must somehow convince them that safety and liveness are our primary moral concerns.
This talk introduces a new Haskell implementation of Raft, deconstructed as a menagerie of cooperating abstractions for communication, storage and control. A pure protocol description allows local testing and validation of exactly the code that is run on a production cluster. We pause to consider denotational semantics for, like, 30 seconds (“What does it mean to be a protocol?”) before tumbling into the rabbit-hole of code that deals with machines who can’t even agree on what time of day it is.
All attendees will receive a free monad.
Category Theory is known as one of the most terrifyingly abstract branches of maths, yet functional programmers swear by its effectiveness applied to code. How can this be? What makes it so useful for real-world software development?
Ken will answer these questions and more, explaining the basic concepts and terminology of Category Theory, how it reveals deep underlying patterns in our software, and why that matters.
At Atlassian, we’re taking the FP principles seriously to ensure we can deliver major architectural improvements without risk of losing customer data. The last frontier is applying immutability to databases, which, conceptually, we’ve all been brought up to treat as entities that need to be mutable. We are starting to embrace event sourcing - capturing streams of immutable ‘events’ to represent domain models instead of update-in-place. This approach ensures safety, and provides both audit trails by default and flexibility to support practically any schema changes and queries we might require in the future, simply by changing our interpretation of events when we replay them. In this talk, we will describe in detail what event sourcing is, why we use it, actual examples of our domain models implemented in terms of event sourcing, and how we implemented it in Scala using scalaz-stream on top of standard key value stores (AWS DynamoDB).
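The core mechanic of event sourcing can be sketched in a few lines (illustrative Haskell; the talk's models are implemented in Scala):

```haskell
-- Domain state is never updated in place: it is a left fold over an
-- append-only stream of immutable events.
data Event = Deposited Int | Withdrew Int deriving Show

newtype Balance = Balance Int deriving (Show, Eq)

apply :: Balance -> Event -> Balance
apply (Balance n) (Deposited m) = Balance (n + m)
apply (Balance n) (Withdrew m)  = Balance (n - m)

-- Replaying the stream reconstructs current state; changing the
-- interpretation (say, for an audit trail) needs no schema migration.
replay :: [Event] -> Balance
replay = foldl apply (Balance 0)

main :: IO ()
main = print (replay [Deposited 100, Withdrew 30, Deposited 5])  -- Balance 75
```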
In modern web applications, the client can do far more than simply showing data retrieved from the server. Clients are powerful enough to perform complex tasks without assistance from the server. The result is a greatly improved experience for the user: Network roundtrips can be avoided by local processing, improving responsiveness, and the application might remain usable even during intermittent loss of connectivity.
Bringing more responsibilities to the client requires the functionality to be duplicated for the browser environment. Additionally, to effectively deal with network latency and service interruptions, the application has to be designed around managing local state on the clients and effectively synchronizing this state with the server.
On some platforms, this would be a daunting task, requiring major chunks of code to be rewritten to make them work on the client. Fortunately, for Haskell we have GHCJS, allowing us to do everything from a single codebase. Haskell’s static type system and strong focus on purity and concurrency make it ideally suited for this type of application.
The talk will show how we can use Haskell and GHCJS to optimally distribute computations between the client and server, relying on Cloud Haskell for type-safe communication. The end result is a heterogeneous distributed system that decides where to perform computations depending on throughput and latency requirements and availability of data. Additionally we will explore how to work with local state on the client and what properties we need from our data structures for effectively dealing with synchronization.
This talk is a case study in application development using Haskell.
I’ll describe retcon - a system to detect and propagate changes in data duplicated across multiple business systems. I’ll explain the business requirements which drove the project, the design of the system, and the implementation in Haskell of two versions:
Version 1.0 was complicated, difficult to deploy, and used a number of advanced type system features to quite peculiar effect.
Version 2.0 is simpler, more capable, easier to deploy, and doesn’t try to showcase all of the language features.
I’ll highlight some of the pitfalls I encountered over the course of the project and ways they could have been avoided.
You should enjoy this talk if you like Haskell, language feature gymnastics, or system development war stories.
“Make illegal state irrepresentable” - Yaron Minsky
Starting from Yaron’s quote we will see where it traditionally applies: value sanitization (escaping), data structures (NonEmptyList), CS constructs (parsers). Does it still apply to the rest of day-to-day programming, where things look… complicated?
In particular we will develop the example of executing applications which can run on single Amazon EC2 instances (for testing) or on full EMR clusters (for production) and which can take data from S3 (meaning that they need to download it and put it somewhere first) or use local data.
Can we make sure we get the right paths? Can we know when it is possible to download data efficiently (using distcopy to go directly from S3 to the cluster for example)? The answer is: create specific data types to model the different cases and tame the complexity.
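A sketch of this modelling approach in Haskell (type and constructor names are hypothetical, not the talk's actual code):

```haskell
-- Where the application runs, and where its data lives.
data Target   = SingleInstance | EmrCluster        deriving Show
data Location = S3 FilePath    | Local FilePath    deriving Show

-- The "logic" lives in the types: only the cluster + S3 combination can
-- use distcp to copy data straight from S3 onto the cluster.
data CopyPlan
  = DistCp FilePath
  | DownloadThenUpload FilePath
  | UseLocal FilePath
  deriving (Show, Eq)

plan :: Target -> Location -> CopyPlan
plan EmrCluster     (S3 p)    = DistCp p
plan SingleInstance (S3 p)    = DownloadThenUpload p
plan _              (Local p) = UseLocal p

main :: IO ()
main = print (plan EmrCluster (S3 "s3://bucket/input"))
```

Because every combination of target and location must be handled by `plan`, the compiler tells us when a new case (say, a new storage backend) has not been thought through.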
The take-away: datatypes are not only for “data” but they can also encapsulate “logic”.