# Planet Scala

Scala blogs aggregated

## April 20, 2014

### scala-lang.org

#### Scala 2.11.0 is now available!

We are very pleased to announce the final release of Scala 2.11.0!

There have been no code changes since RC4, just improvements to documentation and a version bump to the most recent stable version of Akka actors. Here’s the difference between the release and RC4.

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x ecosystem; please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

### New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

• Collections
  • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
  • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancements of up to 4x and 2x, respectively).
  • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
  • List has improved performance on map, flatMap, and collect.
  • See also Deprecations below: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.

• Modularization
  • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, and swing, as well as the (unsupported) continuations plugin and library, are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library, as explained below.
  • The compiler has been modularized internally to separate the presentation compiler, scaladoc, and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.

• Reflection, macros and quasiquotes
  • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
  • See also this summary of the experimental side of the 2.11 development cycle.
  • #3321 introduced Sprinter, a new AST pretty-printing library, which is very useful for tools that deal with source code.

• Back-end
  • The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
  • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method, anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
  • Branch elimination through constant analysis (#2214).
  • Scala.js, a separate project, provides an experimental JavaScript back-end for Scala 2.11. Note that it is not part of the standard Scala distribution.
  • The back-end is now more friendly to the Avian JVM.

• Compiler Performance
  • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
  • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
  • Improved performance of reflection (SI-6638).

• The IDE received numerous bug fixes and improvements!

• REPL

• Improved -Xlint warnings
  • Warn about unused private/local terms and types, and unused imports.
  • This will even tell you when a local var could be a val.

• Slimming down the compiler
  • The experimental .NET back-end has been removed from the compiler.
  • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
  • A search-and-destroy mission for ~5000 chunks of dead code (#1648).
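
As a quick illustration of the new primitive-friendly collections above, here is a minimal sketch of mutable LongMap (AnyRefMap has an analogous API); the variable names are invented for the example:

```scala
import scala.collection.mutable.LongMap

// LongMap specializes on Long keys, avoiding the boxing that a
// generic mutable.Map[Long, Int] incurs on every lookup.
val hits = LongMap.empty[Int]
hits(42L) = 1
hits(42L) += 1                     // read-modify-write, still unboxed
val v = hits.getOrElseUpdate(7L, 0) // inserts 0 for key 7, returns it

println(hits(42L)) // 2
println(v)         // 0
```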

The Scala team and contributors fixed 613 bugs that are exclusive to Scala 2.11.0! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 112 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Iain McGinniss, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, James Ward, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, kurnevsky, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

### Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

### Available projects

The following Scala projects have already been released against 2.11.0! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"                   %% "scalacheck"                % "1.11.3"
"org.scalatest"                    %% "scalatest"                 % "2.1.3"
"org.scalautils"                   %% "scalautils"                % "2.1.3"
"com.typesafe.akka"                %% "akka-actor"                % "2.3.2"
"com.typesafe.scala-logging"       %% "scala-logging-slf4j"       % "2.0.4"
"org.scala-lang.modules"           %% "scala-async"               % "0.9.1"
"org.scalikejdbc"                  %% "scalikejdbc-interpolation" % "2.0.0-beta1"
"com.softwaremill.scalamacrodebug" %% "macros"                    % "0.4"
"com.softwaremill.macwire"         %% "macros"                    % "0.6"
"com.chuusai"                      %% "shapeless"                 % "1.2.4"
"com.chuusai"                      %% "shapeless"                 % "2.0.0"
"org.nalloc"                       %% "optional"                  % "0.1.0"
"org.scalaz"                       %% "scalaz-core"               % "7.0.6"
"com.nocandysw"                    %% "platform-executing"        % "0.5.0"
"com.qifun"                        %% "stateless-future"          % "0.1.1"
"com.github.scopt"                 %% "scopt"                     % "3.2.0"
"com.dongxiguo"                    %% "fastring"                  % "0.2.4"
"com.github.seratch"               %% "ltsv4s"                    % "1.0.0"
"com.googlecode.kiama"             %% "kiama"                     % "1.5.3"
"org.scalamock"                    %% "scalamock-scalatest-support" % "3.0.1"
"org.scalamock"                    %% "scalamock-specs2-support"  % "3.0.1"
"com.github.nscala-time"           %% "nscala-time"               % "1.0.0"
"com.github.xuwei-k"               %% "applybuilder70"            % "0.1.2"
"com.github.xuwei-k"               %% "nobox"                     % "0.1.9"
"org.typelevel"                    %% "scodec-bits"               % "1.0.0"
"org.typelevel"                    %% "scodec-core"               % "1.0.0"
"com.sksamuel.scrimage"            %% "scrimage"                  % "1.3.20"
"net.databinder"                   %% "dispatch-http"             % "0.8.10"
"net.databinder"                   %% "unfiltered"                % "0.7.1"
"io.argonaut"                      %% "argonaut"                  % "6.0.4"
"org.specs2"                       %% "specs2"                    % "2.3.11"
"com.propensive"                   %% "rapture-core"              % "0.9.0"
"com.propensive"                   %% "rapture-json"              % "0.9.1"
"com.propensive"                   %% "rapture-io"                % "0.9.1"
"org.scala-stm"                    %% "scala-stm"                 % "0.7"

The following projects were released against 2.11.0-RC4, with a 2.11 build hopefully following soon:

"org.scalafx"            %% "scalafx"            % "8.0.0-R4"
"org.scalafx"            %% "scalafx"            % "1.0.0-R8"
"org.scalamacros"        %% "paradise"           % "2.0.0-M7"
"com.clarifi"            %% "f0"                 % "1.1.1"
"org.parboiled"          %% "parboiled-scala"    % "1.1.6"
"org.monifu"             %% "monifu"             % "0.4"

### Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full build and Maven build, see example.

scalaVersion        := "2.11.0"

crossScalaVersions  := Seq("2.11.0", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}

### Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

• SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
• SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)
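
To make the SI-4577 change concrete, here is a small sketch (Foo and describe are invented names for the example):

```scala
object Foo

def describe(x: Any): String = x match {
  // In 2.11 this compiles to `Foo eq x` (reference equality), as specified;
  // 2.10 silently used `Foo == x`. Write `case Foo =>` if you want `==`.
  case _: Foo.type => "the Foo singleton"
  case _           => "something else"
}

println(describe(Foo))   // the Foo singleton
println(describe("Foo")) // something else
```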

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

• SI-6809 Case classes without a parameter list are no longer allowed.
• SI-7618 Octal number literals are no longer supported.
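
Both removals have mechanical fixes; a small sketch (Ping and mode are invented names):

```scala
// SI-6809: `case class Ping` (no parameter list) is now an error;
// give it an explicit empty parameter list instead.
case class Ping()

// SI-7618: octal literals such as 0755 no longer compile; parse from
// a string (or use a hex literal) when you need a base-8 value.
val mode = Integer.parseInt("755", 8)

println(mode) // 493
```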

Finally, some notable improvements and bug fixes:

• SI-7296 Case classes with > 22 parameters are now allowed.
• SI-3346 Implicit arguments of implicit conversions now guide type inference.
• SI-6240 Thread safety of reflection API.
• #3037 Experimental support for SAM synthesis.
• #2848 Name-based pattern-matching.
• SI-6169 Infer bounds of Java-defined existential types.
• SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
• SI-5917 Improve public AST creation facilities.
• SI-8063 Expose much needed methods in public reflection/macro API.
• SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).
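
Of these, name-based pattern matching (#2848) is easy to demonstrate: an extractor's result no longer has to be an Option, only something with isEmpty and get, which lets extractors avoid allocation. NonEmpty below is an invented example class, not library API:

```scala
// The extractor result type only needs `isEmpty: Boolean` and `get`.
final class NonEmpty(val get: String) {
  def isEmpty: Boolean = get == null || get.isEmpty
}

object NonEmpty {
  def unapply(s: String): NonEmpty = new NonEmpty(s)
}

def greet(s: String): String = s match {
  case NonEmpty(x) => s"hello, $x"   // binds `x` via name-based `get`
  case _           => "hello, stranger"
}

println(greet("scala")) // hello, scala
println(greet(""))      // hello, stranger
```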

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

### Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). They make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

• SI-7605 Procedure syntax (only under -Xfuture).
• SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
• SI-8035 Automatic insertion of () on missing argument lists.
• SI-6675 Auto-tupling in patterns.
• SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
• SI-1503 Unsound type assumption for stable identifier and literal patterns.
• SI-7629 View bounds (under -Xfuture).
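
To illustrate the SI-6455 item above: for-comprehension guards desugar to withFilter, and the silent fallback to filter is deprecated, so container-like classes should define withFilter themselves. Box is an invented example type:

```scala
// A minimal container usable in for-comprehensions with guards.
final case class Box[A](items: List[A]) {
  def map[B](f: A => B): Box[B] = Box(items.map(f))
  // `if` guards in a for-comprehension call this, not `filter`.
  def withFilter(p: A => Boolean): Box[A] = Box(items.filter(p))
}

val evens = for (x <- Box(List(1, 2, 3, 4)) if x % 2 == 0) yield x * 10

println(evens) // Box(List(20, 40))
```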

We’d like to emphasize the following library deprecations:

• #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
• scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
• SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
• SI-3235 Deprecate round on Int and Long (#3581).
• We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

### Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).
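
For library authors, a MiMa check can be wired into an sbt build along these lines. This is a sketch assuming a current version of the sbt-mima-plugin; the plugin version and setting names below are illustrative and have changed across plugin releases:

```scala
// project/plugins.sbt (plugin version is illustrative)
addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "1.1.3")

// build.sbt: compare the current code against the last released artifact;
// `sbt mimaReportBinaryIssues` then fails on binary incompatibilities.
mimaPreviousArtifacts := Set(organization.value %% name.value % "1.0.0")
```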

#### Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

#### Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue: is binary compatibility ensured for bytecode generated from identical sources by different versions of scalac? (The same problem exists for compiling on different JDKs.) While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.

#### Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)

Scala is now distributed under the standard 3-clause BSD license. Originally, the same 3-clause BSD license was adopted, but slightly reworded over the years, and the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

## April 15, 2014

### Paul Chiusano

#### The future of software, the end of apps, and why UX designers should care about type theory

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures... Yet the program construct, unlike the poet's words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself. […] The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be.
Fred Brooks
Have you noticed how applications accrete feature after feature but never seem quite capable of doing everything we want? Software is a profound technology with enormous potential, and we stifle this potential with an antiquated metaphor. That metaphor is the machine. Software is now organized into static machines called applications. These applications ("appliances" is a better word) come equipped with a fixed vocabulary of actions, speak no common language, and cannot be extended, composed, or combined with other applications except with enormous friction. By analogy, what we have is a railway system where the tracks in each region are of differing widths, forcing trains and their cargo to be totally disassembled and then reassembled to transport anything across the country. As ridiculous as this sounds, this is roughly what we do at application boundaries: write explicit serialization and parsing code and lots of tedious (not to mention inefficient) code to deconstruct and reconstruct application data and functions.

This essay is a call to cast aside the broken machine metaphor and ultimately end the tyranny of applications. Applications can and ultimately should be replaced by programming environments, explicitly recognized as such, in which the user interactively creates, executes, inspects and composes programs. In this model, interaction with the computer is fundamentally an act of creation, the creative act of programming, of assembling language to express ideas, access information, and automate tasks. And software presents an opportunity to help humanity harness and channel "our vast imaginations, humming away, charged with creative energy".

Though the machine metaphor is wrong for software, it's also understandable why it's persisted. Before the discovery of software, arguably in the 1930s with Alan Turing's invention of the universal Turing machine, human technology had produced only physical artifacts like cash registers, engines, and light bulbs, built for some particular purpose and equipped with a largely fixed vocabulary of actions. With software came the idea that behavior and functionality could be specified as pure information, independent of the machine which interprets them. This raised novel possibilities. As pure information, a program is infinitely copyable at near zero cost, and in the internet age, capable of being transported anywhere on the planet almost instantaneously. A programmer can now miraculously turn thoughts to reality and deploy them around the globe by typing on a keyboard and clicking a few buttons. Though our mindset hasn't caught up yet, software relegated the machine (which once held primacy for the artifacts and technology produced by civilization) to an implementation detail, a substrate for the real technology--the specification of behavior in the form of a program.

We artificially limit the potential of this incredible technology by reserving a tiny, select group of people (programmers) to use its power to build applications with largely fixed sets of actions (and we now put these machines on the internet too and call them "web applications"), whose behaviors are not composable with other programs. Software let us escape the tyranny of the machine, yet we keep using it to build more prisons for our data and functionality!
Bob: Now, wait a minute. Applications usually have an API too, you know. If you really want programmability, why not just use the API?
Alice: I wouldn't say 'usually', but okay, in theory let's suppose that's true. In practice, even though I'm a programmer and could in principle customize the applications I use, I don't because of the friction of doing so. Each application is a universe unto its own, with its own language and idiosyncratic modes of interaction. The situation hasn't improved with web applications, which have somewhat converged on ad hoc JSON+REST protocols as the lingua franca of application programmability.
Bob: What's wrong with that? There are JSON parsers for every programming language under the sun! I even wrote a really fast, push-based, nonblocking parser in 5,000 lines of Java! It's pretty awesome. Check out how I optimized the parsing by hand-coding a switch-statement-based state machine for the parse table to reduce allocation rates and improve cache loc--
Alice: You're missing my point! Compare the overhead of calling a function in the 'native' language of your program vs calling a function exposed via JSON+REST. And no I don't mean the computational overhead, though that is a problem too. Within the language of your program, if I want to call a function returning a list of (employee, date) pairs, I simply invoke the function and get back the result. With JSON+REST, I get back a blob of text, which I then have to parse into a syntax tree and then convert back to some meaningful business objects. If I had that overhead and tedium for every function call I made I'd have quit programming long ago.
Bob: Are you just saying you want more built in types for JSON, then? That's easy, I hear there's even a proposal to add a date type to JSON.
Alice: And maybe in another fifteen years JSON will grow a standard for sending algebraic data types (they've been around for like 40 years, you know) and other sorts of values, like, you know, functions--
Bob: Functions?? Are you serious? You aren't talking about sending functions across the internet and just executing them, that's a huge security liability!
Alice: Nevermind that for now. My point is--
Bob: --now wait a minute! You know, I was humoring you earlier by saying if you wanted programmability you could just use the application's API. Okay, for the sake of argument I'll grant that this can be rather inconvenient. But so what? You and I both know that 99.9% of users don't want to program or customize; they are perfectly happy with applications that do one thing, and do it well.
Alice: I wouldn't say 'perfectly happy', I'd say that users are resigned to the notion that applications are machines with a fixed set of actions, and any limitations of these machines must simply be worked around via whatever tedium is required. But of course they would think that--we've never shown our users software that didn't work just like a machine, so how could we expect them to know about the wonderful, customizable universe of possibilities that we programmers get to play in every day? This isn't a good state of affairs, it's sad, and we ought to start doing something about it! It isn't hopeless--in fact, I find that if you get users in the right mindset they are positively incessant about wanting to customize their user experience and the actions supported by an application. It's human nature, our inner spirit of creativity and invention that can never be truly squelched! When we are shown something of use or interest to us, some piece of functionality or data, we begin thinking up possible variations and combinations that also interest us or seem useful.
Bob: Okay, but let's be realistic. Do you really expect your users to be booting up text editors, running compilers, interpreting syntax and type errors and so forth just to get something accomplished?
Alice: Of course not--no user should have to put up with the arcane programming environments that we professional programmers have to endure on a daily basis. Then again, we shouldn't have to either! Which is why the goal of software should not be to build machines, but to build pleasing, accessible programming environments that delight and inspire our users to creation while facilitating the sharing and reuse of programming ideas! Yes, we can and should optimize these environments for programming in various domains, which could include graphical views and so forth, but we should still place these environments in a unified framework rather than in walled gardens of functionality like the current batch of (web) appliances... er, 'applications'.
Bob: So what are you saying? Get rid of Microsoft Word, Outlook, Gmail, Twitter, Facebook, and all the rest?
Alice: Yes! Or rather, I would deconstruct these applications into libraries and grant users access to the functions and data types of these libraries within a grand unified programming environment.
Bob: I want to talk more about that... but in any case, these applications you deride aren't just libraries, they are providing an intuitive interface to functionality that people find valuable, and we are going to need some sort of interface to this functionality that's better than a text editor and the command line. Providing this better interface is what applications do.
Alice: If the interfaces provided by these applications are so intuitive, why are there rows and rows of 'Missing Manual' and 'For Dummies' books covering just about every application under the sun? Applications are failing at even their stated goal, but they do worse than that. Yes, an application is an (often terrible) interface to some library of functions, but it also traps this wonderful collection of potential building blocks in a mess of bureaucratic red tape. Any creator wishing to build atop or extend the functionality of an application faces a mountain of idiosyncratic protocols and data representations and some of the most tedious sort of programming imaginable: parsing, serializing, converting between different data representations, and error handling due to the inherent problem of having to pass through a dynamically typed and insufficiently expressive communication channel! And that's if an application even exposes any significant portion of its functionality through an actual API, which they often don't. We can do so much better!
Bob: All right, I'll bite. Let's hear your story for how to organize the computing world without applications.

## The world without applications

The 'software as machine' view is so ingrained in people's thinking that it's hard to imagine organizing computing without some notion of applications. But let's return to first principles. Why do people use computers? People use computers in order to do and express things, to communicate with each other, to create, and to experience and interact with what others have created. People write essays, create illustrations, organize and edit photographs, send messages to friends, play card games, watch movies, comment on news articles, and they do serious work too--analyze portfolios, create budgets and track expenses, find plane flights and hotels, automate tasks, and so on. But what is important, what truly matters to people, is simply being able to perform these actions. That each of these actions presently takes place in the context of some 'application' is not in any way essential. In fact, I hope you can start to see how unnatural it is that such stark boundaries exist between applications, and how lovely it would be if the functionality of our current applications could be seamlessly accessed and combined with other functions in whatever ways we imagine. This sort of activity could be a part of the normal interaction that people have with computers, not something reserved only for 'programmers', and not something that requires navigating a tedious mess of ad hoc protocols, dealing with parsing and serialization, and all the other mumbo-jumbo that has nothing to do with the idea the user (programmer) is trying to express. The computing environment could be a programmable playground, a canvas in which to automate whatever tasks or activities the user wished.

Let me give an example of the problems with the current application-oriented model, and show what possibilities are put out of reach by our current framing of software. Please don't get bogged down in the details; I'm just trying to be illustrative here.

Suppose Carol and Dave are a young, conscientious couple intent on being disciplined about saving for retirement. But they want to enjoy their time together as well, and so as part of their budget, which they manage using Mint.com, they allocate $200 per month to a virtual 'vacation' fund which accumulates from month to month. They also keep a shared Google doc in which they both jot down ideas for places they'd like to go and things they might like to do. Periodically, they take a vacation, drawing ideas from this doc. They make sure to keep the total cost of the trip under the amount that has accumulated in their vacation fund, and then attribute the cost of the trip to their vacation budget so it is deducted by Mint.com. So far so good, but Carol, who is the planner in the relationship, notices that whenever she plans a vacation for the two of them she's doing a similar sort of thing. First, she opens up the Mint.com application to see how much money has accumulated in their vacation fund. Next, she opens up the Google doc to remind herself of the possible locations for trips they could take. Then, she goes to Kayak.com and searches for plane flights under the budget price, taking care to reserve enough leftover money for booking a hotel (say, on Hotels.com) and whatever other expenses are to be expected on the trip. It's a complex process, with lots of information and factors to keep straight, and it must be repeated each and every time they wish to plan a trip. Carol wonders if it would be possible to automate this process somehow, at least partially.
She'd like a program that extracts a list of locations from their shared Google doc, then gets a list of possible flights to these locations and a list of possible hotels, then filters out any flight+hotel combinations that exceed the budget, then gives her the opportunity to interactively filter and browse through possible results, perhaps even allowing for interactive adjustments to certain base assumptions like the daily cost of miscellaneous expenses while on vacation, the dates of the trip, etc. This would save a lot of time and make the planning process more fun and creative. Unfortunately, this sort of thing just isn't possible today. Kayak.com and Mint.com both lack APIs! Mint lets users download their transaction history, but this history doesn't indicate how much money has accumulated in each budget category. Kayak is even worse--it lacks a search API entirely. So it seems Carol and Dave would be reduced to screen scraping if they want to programmatically build on Kayak and Mint. Google Docs at least comes equipped with an API, but it's an ad hoc XML-over-REST API, and there's friction associated with its use due to having to parse XML and so on. Overall, the friction and overhead of implementing this automation idea is way too high to justify it, so Carol doesn't bother and just does everything manually, or worse, gives up on a dream vacation!

Now let's imagine how things could be. Kayak, Mint, and Google Docs would be, first and foremost, libraries rather than applications. Each might come equipped with custom views or editing environments for writing and executing certain 'shapes' of programs, but these views would not be their primary (or only) mode of interaction, as they are now. Instead, the collection of functions and data types in these libraries would be primary, and accessible within the unified programming environment of the user's desktop.
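To make this concrete, here is a hypothetical Scala sketch of Carol's planner in that imagined world. Every type and function in it (`Flight`, `Hotel`, `planTrips`, and the lookup functions) is invented for illustration; no such library APIs exist today.

```scala
// Hypothetical sketch: if Mint, Google Docs, and Kayak were libraries rather
// than applications, Carol's pipeline could compose their functions directly.
// All names here are invented for illustration.
object TripPlanner {
  case class Flight(city: String, price: Double)
  case class Hotel(city: String, nightlyRate: Double)
  case class Trip(flight: Flight, hotel: Hotel, nights: Int) {
    def totalCost: Double = flight.price + hotel.nightlyRate * nights
  }

  // budget: what has accumulated in the vacation fund (from the budgeting library)
  // locations: the couple's shared list of destination ideas (from the docs library)
  def planTrips(budget: Double,
                locations: List[String],
                flightsTo: String => List[Flight],
                hotelsIn: String => List[Hotel],
                nights: Int): List[Trip] =
    for {
      city   <- locations
      flight <- flightsTo(city)
      hotel  <- hotelsIn(city)
      trip    = Trip(flight, hotel, nights)
      if trip.totalCost <= budget // keep only affordable combinations
    } yield trip
}
```

Carol could rerun this with a different budget or trip length, or pipe the resulting list into whatever interactive view she likes; notably, there is no parsing, serialization, or screen scraping anywhere in sight.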
This programming environment, moreover, would allow for transparent access to remote functionality, so users could write programs that call functions exposed via cloud services as well as functions defined locally.

If that example seems contrived, here's a more 'serious' one: a widget-making business has a customer relationship management (CRM) application that's used by the sales team. For each potential client, they make notes about what widget features clients are most interested in. The company also uses project management software that lets them track features, improvements, and fixes to the product, and group these into releases. Whenever the company rolls out a new version of the widget product, the sales team would like to cross-reference the list of changes extracted from the project management software with the list of all the clients or leads who would be interested in those changes. Moreover, it would be nice to take this list of potential clients who might be interested in newly released features and assemble a form email calling out the particular features or improvements in the new version that each client was interested in. The sales team can of course add any personal touches to the emails before sending them to the potential clients. Today, this process might end up being done manually, which doesn't scale very well if a business has hundreds or thousands of 'live' sales leads and a large number of features that they roll out with each release. Even if both the CRM and the project management app come with APIs, there is quite a bit of friction involved in writing a program that 'speaks' both APIs and handles all the boring concerns like parsing, deserialization, error handling, and so on. I just made up these use cases, and I could come up with hundreds of others.
No one piece of software 'does it all', and so individuals and businesses looking to automate or partially automate various tasks are often put in the position of having to integrate functionality across multiple applications, which is often painful or flat out impossible. The amount of lost productivity (or lost leisure time) on a global scale, both for individuals and businesses, is absolutely staggering.
Bob: All right, I think I finally see what you're getting at. These are very old ideas, you know. Haven't you ever heard of the Unix Philosophy? In fact, I could probably implement most of your use cases with 'a very small shell script'.
Alice: You make it sound like Thompson and Ritchie invented the idea of composition. Mathematicians were composing functions for hundreds (or even thousands) of years before that without making such a fuss about it or waving any sort of philosophical flag. But anyway, I would love to see you try to implement those tasks with a shell script, as you say. Have you ever tried reading a shell script written by someone else that's longer than 10 lines or so? I'm a professional programmer, well-trained in navigating all the arcane nonsense that's common in software, and a small part of me dies every time I have to write or read a bash script. I appreciate the spirit of the Unix Philosophy, but the implementation (writing programs in a terrible language that read and write 'vaguely parseable text') leaves a lot to be desired. And JSON and XML aren't much better, either.
Bob: So you really think that Carol and some sales guys are going to be writing programs, even if it is in some theoretical future souped-up graphical programming environment?
Alice: Why does that seem so unlikely to you?
Bob: Because writing software is complicated! I know because I'm a professional programmer. We can't expect the masses to be writing the sort of complex programs that we professional programmers produce.
Alice: 'Complex programs'?
You mean like Instagram? A website where you can post photos of kittens and subscribe to a feed of photos produced by other people? Or Twitter? Or any one of the 95% of applications which are just a CRUD interface to some data store? The truth is, if you strip applications of all their incidental complexity (largely caused by the artificial barriers at application boundaries), they are often extremely simple. But in all seriousness, why can't more people write programs? Millions of people use spreadsheets, an even more impoverished and arcane programming environment than what we could build.
Bob: Maybe so, but I still don't think that a programming environment can ever be accessible to the majority of people. Spreadsheets are a good example--they are a rather accessible (if limited) form of programming, and not everyone uses the programmability of spreadsheets or even wants to!
Alice: And two thousand years ago, most of the population was illiterate and arithmetic was considered too difficult for the average person, yet now we teach kids these things in elementary school. The truth is, we don't really know how many people might program if given a learnable programming environment and programming were reduced to its exhilarating, creative essence. I worry we have raised generations of programmers who are simply very good at tolerating bullshit and, paraphrasing Paul Lockhart, the most talented programmer of our time may be a waitress in Tulsa, Oklahoma, who considers herself bad at computers. The spreadsheet brought programming (in a limited fashion) to millions of people, and a more accessible environment could bring it to millions or billions more. Who are you, with your limited imagination, to place a ceiling on how accessible programming could be?
Well, the world is what we make of it, and I want to make a world in which applications die off, programming is no longer the awkward, arcane, and tedious process it often is today, and functionality can be transparently shared, used, and composed across the internet. Which brings me to my next point...

## What's wrong with the internet

The internet contains vast pools of data and functionality largely trapped within noncomposable applications all competing to be the center of the universe. The economy of the internet is deeply broken. Have you ever wondered why the internet market is dominated by a few huge businesses like Google, Facebook, Twitter, etc.? High transaction costs imposed by application boundaries have distorted the software economy, making it artificially expensive to integrate functionality from third parties. This selects for larger businesses with the resources to develop and integrate functionality internally, which they do using composable libraries within their own application boundaries. From here, network effects due mostly to high switching costs (again, because of application boundary friction) sustain the positions of these larger market players. We essentially have a situation in which these larger market players own a significant portion of the network effects on the web. It would be preferable if ownership of these network effects were transferred to the public domain and businesses were forced to compete on their ideas and on the cleverness with which they describe those ideas in software, rather than competing, as they do now, on how well they can coax users into entering various walled gardens and keep them there with lock-in and high switching costs.
With a unified programming environment spanning the web (I'll say more about this in another post), we could see these transaction costs and switching costs drop to nearly zero, and a radical democratization of the internet market as ownership of these network effects is transferred to the public domain.

Unlike the production of many physical goods and services, software does not have any natural economies of scale. Arguably, there are diseconomies of scale with software: per unit of functionality, software becomes harder to write with the addition of more people, resources, and code, because of the complexity of managing a large codebase and coordinating concurrent development. Large businesses with significant codebases fight a constant (losing) battle against entropy and employ armies of developers to maintain and make rudimentary additions to functionality. The 'economies of scale' with software are almost entirely due to artificially high transaction costs caused by the application-centered world view and the lack of a unified computational framework owned by the public. As a civilization, we would be better off if software could be developed by small, unrelated groups, with an open standard that allowed these groups to trivially combine functionality produced anywhere on the network.

What I am proposing is a radical shift that could mean the end of huge internet businesses like Google and Facebook. Or rather, it means that Google and Facebook would be forced to compete on functionality with programmers all over the world, any of whom could write similar functionality that could be substituted for Google/Facebook functionality with literally zero switching costs. Oh, I might use Google as a 'cloud provider', a place to stick my data and my computations, but this would be using Google as a commodity, an implementation detail, much the way I use the physical computer on which I type this right now.
At any point, I could choose to transfer my data and personal functionality to another cloud provider, again with zero switching costs. And while we're at it, perhaps we could dispense with cloud providers entirely and replace them with a peer-to-peer network in which individuals share compute time and local storage!
Bob: I wouldn't knock Google, Twitter, and Instagram... they are serving literally millions of concurrent users. That's a serious technical challenge, you know.
Alice: A serious technical challenge that has been created artificially! In the world I envision, the (limited) functionality of sites like Twitter could be written as a library and then used in a decentralized way by anyone connected to the internet. Writing such a library would require no servers, no capital, and could be completed by a programmer (or user) in a weekend! Think about it--if I write quicksort as a library function, is there any 'serious technical challenge' in making it possible for my function to be used by millions of users? No, of course not, because my function is pure information and can be transported all over the world and run by a billion people simultaneously, without my having to do anything other than put the code somewhere connected to the internet. But for some strange reason, if I write a function that operates on the follows-graph maintained in an (unnecessarily) centralized way by Twitter, I need to deal with all sorts of complexity if I want this function to be used by more than a few hundred people concurrently? Twitter (and Facebook, and Instagram, and Google) are solving problems created by the 'application as center of the universe' viewpoint that is so common today.
Bob: Even so, I think you are vastly underestimating the complexity of the software that these companies produce.
These companies are coordinating the activities of fleets of computers, doing error handling and recovery, and wrapping up often complex functionality in nice, usable interfaces (which, by the way, have seen many man-months' worth of tuning and testing) that you do nothing but complain about! We have it so easy!
Alice: And yet, I still can't get Gmail to do even simple tasks like schedule an email to be sent later or batch up all incoming emails containing a certain phrase into a weekly digest! By the way, I just thought up those use cases on the spot; I could think of dozens more that aren't supported. The problem is, I don't want a machine, I want a toolkit, and Google keeps trying to sell me machines. Perhaps these machines are exquisitely crafted, with extensive tuning and so forth, but a machine with a fixed set of actions can never do all the things that I can imagine might be useful, and I don't want to wait around for Google to implement the functionality I desire as another awkward one-off 'feature' that's poorly integrated and just adds more complexity to an already bloated application.
Bob: Okay, if you want a toolkit, how about Yahoo! Pipes, or If This Then That? Aren't those sort of what you want?
Alice: Absolutely not. For one, I don't want my data and functionality locked up with a particular provider like that. I want an open platform. Who knows when Yahoo! might kill off Pipes or start charging inordinate sums of money for it, and who knows if IFTTT is going to even be around a year from now, given that they seem to have no business model. I would only use these services for throw-away code I don't care about. Have you ever noticed that all the programming languages people use voluntarily are open source? I think it's because no one wants their creations owned by anyone.
But beyond that, the bigger reason I don't like these services is that I want a real programming language, with a real type system that lets me assemble complex functionality with ease and guides me through the process.

## Why UX designers should care about type theory

Applications are bad enough in that they trap potentially useful building blocks for larger program ideas behind artificial barriers, but they fail at even their stated purpose of providing an 'intuitive' interface to whatever fixed set of actions and functionality their creators have imagined. Here is why: for all but the simplest applications, there are multiple contexts within the application, and there needs to be a cohesive story for how to present only 'appropriate' actions to the user and prevent nonsensical combinations based on context. This becomes serious business as the number of actions offered by an application and the set of possible contexts grow. As an example, if I have just selected a message in my inbox (this is a 'context'), the 'send' action should not be available, but if I am editing a draft of a message it should be. Likewise, if I have just selected some text, the 'apply Kodachrome-style retro filter' action should not be available, since that only makes sense applied to a picture of some sort. These are just silly examples, but real applications will have many more actions to organize and present to users in a context-sensitive way. Unfortunately, applications tend to do this with various ad hoc approaches that don't scale very well as more functionality is added--generally, they allow only a fixed set of contexts, and they hardcode what actions are allowed in each context. ('Oh, the send function isn't available from the inbox screen? Okay, I won't add that option to this static menu.' 'Oh, only an integer is allowed here?
Okay, I'll add some error checking to this text input.') Hence the paradox: applications never seem to do everything we want (because by design they can only support a fixed set of contexts and because how to handle each context must be explicitly hardcoded), and yet we also can't seem to easily find the functionality they do support (because the set of contexts and allowed actions is arbitrary and unguessable in a complex application).

There is already a discipline with a coherent story for how to handle concerns of what actions are appropriate in what contexts: type theory. Which is why I now (half) jokingly introduce Chiusano's 10th corollary: Any sufficiently advanced user-facing program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a real programming language and type system.

Programming languages and type theory have largely solved the problem of how to constrain user actions to only 'appropriate' alternatives and present these alternatives to users in an exquisitely context-sensitive way. The fundamental contribution of a type system is to provide a compositional language for describing possible forms values can take, and to provide a fully generic program (the typechecker) for determining whether an action (a function) is applicable to a particular value (an argument to the function). Around this core idea we can build UI for autocompletion, perfectly appropriate context menus, program search, and so on. Type systems provide a striking, elegant solution to a problem that UX designers now solve in more ad hoc ways. These ad hoc methods don't scale and can never match what is possible when guided by an actual type system and the programming environment to go with it. The work that remains is more around how to build meaningful, sensitive, real-time interfaces to the typechecker and integrate it within a larger programming environment supporting a mixture of graphical and textual program elements.
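To illustrate the corollary, here is a toy Scala sketch (all names invented for this example) of 'contexts as types': the typechecker, rather than a hand-maintained menu, determines which actions apply to which values.

```scala
// Toy illustration: encode each UI 'context' as a type. A context-sensitive
// interface could then be derived from these signatures, offering 'send' only
// for drafts and the retro filter only for images. All names are invented.
object MailToolkit {
  case class Draft(body: String)        // a message being composed
  case class Received(body: String)     // a message sitting in the inbox
  case class Image(pixels: Vector[Int])

  def send(d: Draft): Unit = println(s"sending: ${d.body}")
  def reply(m: Received): Draft = Draft("> " + m.body)
  def retroFilter(img: Image): Image = Image(img.pixels.map(p => (p * 3) / 4))

  // send(Received("hi"))     // rejected by the typechecker: wrong context
  // retroFilter(Draft("hi")) // rejected: filters apply only to images
}
```

An autocompletion engine backed by the typechecker can enumerate exactly the functions applicable to the value at hand; that is the 'perfectly appropriate context menu' described above, obtained for free from the signatures.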
Note that the richer the type system, the more mileage we get out of this approach.

## Conclusion

I'll conclude with a great quote by Rúnar Bjarnason, explaining how we got to this point, and why it is deeply wrong:

> In the early days of programming, there were no computers. The first programs were written, and executed, on paper. It wasn't until later that machines were first built that could execute programs automatically. During the ascent of computers, an industry of professional computer programmers emerged. Perhaps because early computers were awkward and difficult to use, the focus of these professionals became less thinking about programs and more manipulating the machine. Indeed, if you read the Wikipedia entry on "Computer Program", it tells you that computer programs are "instructions for a computer", and that "a computer requires programs to function". This is a curious position, since it's completely backwards. It implies that programming is done in order to make computers do things, as a primary. I'll warrant that the article was probably written by a professional programmer. But why does a computer need to function? Why does a computer even exist? The reality is that computers exist solely for the purpose of executing programs. The machine is not a metaphysical primary. Reality has primacy, a program is a description, an abstraction, a proof of some hypothesis about an aspect of reality, and the computer exists to deduce the implications of that fact for the pursuit of human values.

Though the post talks specifically about not creating our programming languages in the machine's image, we should apply the same reasoning to the useful bundles of data and functionality that we now call 'applications'. So there you have it. The machines are no longer primary. End the tyranny of applications!

## April 07, 2014

### scala-lang.org

#### Scala 2.11.0-RC4 is now available!
We are very pleased to announce Scala 2.11.0-RC4, the next release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central. Since RC3, we’ve fixed two blocker bugs and added some final polish for macros and quasiquotes. Here’s the difference between RC4 and RC3.

Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate (or the final) will be cut on Friday April 11, if there are no unresolved blocker bugs. Our goal is to have the next release be the final – please help us make sure there are no important regressions!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x eco-system; please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library. For production use, we recommend the latest stable release, 2.10.4.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

The Scala team and contributors fixed 613 bugs that are exclusive to Scala 2.11.0-RC4! We also backported as many fixes as possible. With the release of 2.11, 2.10 backports will be dialed back. Since the last RC, we fixed 11 issues via 37 merged pull requests.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.
Concretely, according to `git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn`, 111 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Denys Shabalin, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Ward, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres. Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!
### Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list. Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

### Available projects

The following Scala projects have already been released against 2.11.0-RC4! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

```scala
"org.scalacheck" %% "scalacheck" % "1.11.3"
"com.typesafe.akka" %% "akka-actor" % "2.3.0"
"org.scalatest" %% "scalatest" % "2.1.3"
"org.scala-lang.modules" %% "scala-async" % "0.9.1"
"org.scalafx" %% "scalafx" % "8.0.0-R4"
"com.chuusai" %% "shapeless" % "1.2.4"
"com.chuusai" %% "shapeless" % "2.0.0"
"org.scalamacros" %% "paradise" % "2.0.0-M7"
"org.scalaz" %% "scalaz-core" % "7.0.6"
"org.specs2" %% "specs2" % "2.3.10"
```

The following projects were released against 2.11.0-RC3, with an RC4 build hopefully following soon:

```scala
"org.scalafx" %% "scalafx" % "1.0.0-R8"
"com.github.scopt" %% "scopt" % "3.2.0"
"com.nocandysw" %% "platform-executing" % "0.5.0"
"io.argonaut" %% "argonaut" % "6.0.3"
"com.clarifi" %% "f0" % "1.1.1"
"org.parboiled" %% "parboiled-scala" % "1.1.6"
"com.sksamuel.scrimage" %% "scrimage" % "1.3.16"
"org.scala-stm" %% "scala-stm" % "0.7"
"org.monifu" %% "monifu" % "0.4"
```

### Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new Scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on scala-xml to use Scala’s XML support. Here’s how we recommend handling this in sbt 0.13. For the full build and Maven build, see example.
```scala
scalaVersion := "2.11.0-RC4"

crossScalaVersions := Seq("2.11.0-RC4", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.1"
    case _ =>
      libraryDependencies.value
  }
}
```

### Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects. Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first:

• SI-4577 Compile `x match { case _ : Foo.type => }` to `Foo eq x`, as specified. It used to be `Foo == x` (without warning). If that’s what you meant, write `case Foo =>`.
• SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: `class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }`, as the `bar` in `bar: String` refers to the `bar` with type `T`. The `Foo[String]`’s `bar` is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

• SI-6809 Case classes without a parameter list are no longer allowed.
• SI-7618 Octal number literals are no longer supported.

Finally, some notable improvements and bug fixes:

• SI-7296 Case classes with > 22 parameters are now allowed.
• SI-3346 Implicit arguments of implicit conversions now guide type inference.
• SI-6240 Thread safety of reflection API.
• #3037 Experimental support for SAM synthesis.
• #2848 Name-based pattern-matching.
• SI-6169 Infer bounds of Java-defined existential types.
• SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
• SI-5917 Improve public AST creation facilities.
• SI-8063 Expose much-needed methods in the public reflection/macro API.
• SI-8126 Add -Xsource option (make the 2.11 type checker behave like 2.10 where possible).

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

### Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler). Deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization, and infrastructure work.

The following language “warts” have been deprecated:

• SI-7605 Procedure syntax (only under -Xfuture).
• SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
• SI-8035 Automatic insertion of () on missing argument lists.
• SI-6675 Auto-tupling in patterns.
• SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
• SI-1503 Unsound type assumption for stable identifier and literal patterns.
• SI-7629 View bounds (under -Xfuture).
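As a concrete illustration of the deprecated procedure syntax (SI-7605), the two definitions below are equivalent; under -Xfuture the first form draws a deprecation warning. The class and method names here are arbitrary.

```scala
import scala.collection.mutable.ListBuffer

class Logger {
  val entries = ListBuffer.empty[String]

  // Procedure syntax (SI-7605), deprecated under -Xfuture: omitting '=' after
  // the parameter list implies a Unit result type. (Scala 2.x only.)
  def logOld(msg: String) { entries += msg }

  // Preferred explicit form: state the Unit result type and use '='.
  def log(msg: String): Unit = entries += msg
}
```

Rewriting procedure-syntax methods to the explicit `: Unit =` form is a purely mechanical change with identical behavior.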
We’d like to emphasize the following library deprecations:

• #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
• scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
• SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
• SI-3235 Deprecate round on Int and Long (#3581).
• We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

### Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible.
In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala. We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant, so this is out of reach for its type system).

#### Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

#### Meta

Note that so far we’ve only talked about the jars generated by scalac for the standard library and reflection. Our policies do not extend to the meta-issue: ensuring binary compatibility for bytecode generated from identical sources by different versions of scalac. (The same problem exists for compiling on different JDKs.)
While we strive to achieve this, it’s not something we can test in general. Notable examples where we know meta-binary compatibility is hard to achieve: specialisation and the optimizer.

In short, if binary compatibility of your library is important to you, use MiMa to verify compatibility before releasing. Compiling identical sources with different versions of the Scala compiler (or on different JVM versions!) could result in binary incompatible bytecode. This is rare, and we try to avoid it, but we can’t guarantee it will never happen.

#### Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that this means it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones, that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)
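The MiMa check recommended above can be wired into an sbt build. A minimal sketch, assuming the sbt-mima-plugin; the plugin version, the `org.example`/`mylib` coordinates, and the exact setting names are illustrative assumptions (they have changed across plugin releases), so check the plugin’s own documentation:

```scala
// project/plugins.sbt (version is a placeholder; see the plugin's README)
// addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "<version>")

// build.sbt: declare the last released artifact to compare the current
// sources against; `sbt mimaReportBinaryIssues` then reports any change
// that would break backwards binary compatibility.
mimaPreviousArtifacts := Set("org.example" %% "mylib" % "1.0.0")
```

Running this on every release branch is what keeps the "safe run-time linkage" guarantee described above honest for your own library.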
### New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

• Collections
  • Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
  • Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively).
  • BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
  • List has improved performance on map, flatMap, and collect.
  • See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
• Modularization
  • The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library, as explained above.
  • The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
• Reflection, macros and quasiquotes
  • Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
  • See also this summary of the experimental side of the 2.11 development cycle.
  • #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
• Back-end
  • The GenBCode back-end (experimental in 2.11).
    See @magarciaepfl’s extensive documentation.
  • A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
  • Branch elimination through constant analysis (#2214).
• Compiler Performance
  • Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add `incOptions := incOptions.value.withNameHashing(true)` to your build! Other build tools are also supported. More info at this sbt issue; that’s where most of the work happened. More features are planned, e.g. class-based tracking.
  • We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
  • Improved performance of reflection (SI-6638).
• REPL
• Warnings
  • Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val.
• Slimming down the compiler
  • The experimental .NET backend has been removed from the compiler.
  • Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
  • Search-and-destroy mission for ~5000 chunks of dead code (#1648).

### License clarification

Scala is now distributed under the standard 3-clause BSD license. Originally, the same 3-clause BSD license was adopted, but it was slightly reworded over the years, and the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

#### A big thank you to all the contributors!
| # | Author |
|---|--------|
| 68 | Adriaan Moors |
| 40 | Iain McGinniss |
| 9 | Jason Zaugg |
| 7 | Denys Shabalin |
| 5 | Eugene Burmako |
| 5 | Simon Ochsenreither |
| 4 | A. P. Marki |
| 1 | Grzegorz Kossakowski |
| 1 | François Garillot |

#### Commits and the issues they fixed since v2.11.0-RC3

| Issue(s) | Commit | Message |
|---|---|---|
| SI-8466 | 9fbac09 | SI-8466 fix quasiquote crash on recursively iterable unlifting |
| SI-7291, SI-8460 | 1c330e6 | SI-8460 Fix regression in divergent implicit recovery |
| SI-8460 | 5e795fc | Refactor handling of failures in implicit search |
| SI-6054 | 91fb5c0 | SI-6054 Modern eta-expansion examples that almost run |
| SI-5610, SI-6069 | b3adae6 | SI-6069 Preserve by-name during eta-expansion |
| SI-6054 | 3fb5acc | SI-6054 don't use the defunct List.map2 in example |
| SI-5136 | 71e45e0 | SI-5136 correct return type for unapplySeq |
| SI-6195 | aa6e4b3 | SI-6195 stable members can only be overridden by stable members |
| SI-5605 | 1921528 | SI-5605 case class equals only considers first param section |
| SI-6054 | 51f3ac1 | SI-6054 correct eta-expansion in method value using placeholder syntax |
| SI-5155 | 3c0d964 | SI-5155 xml patterns do not support cdata, entity refs or comments |
| SI-5089 | 84bba26 | SI-5089 update definition implicit scope in terms of parts of a type |
| SI-7313 | 227e11d | SI-7313 method types of implicit and non-implicit parameter sections are never e |
| SI-7672 | 7be2a6c | SI-7672 explicit top-level import of Predef precludes implicit one |
| SI-5370 | aa64187 | SI-5370 PartialFunction is a Function with queryable domain |
| SI-4980 | 4615ec5 | SI-4980 isInstanceOf does not do outer checks |
| SI-1972 | f0b37c2 | SI-1972 clarify getter and setter must be declared together |
| SI-5086 | 5135bae | SI-5086 clean up EBNF |
| SI-5065 | 32e0943 | SI-5065 val/var is optional for a constructor parameter |
| SI-5209 | 64b7338 | SI-5209 correct precedence of infix operators starting with ! = |
| SI-4249 | e197cf8 | SI-4249 try/catch accepts expression |
| SI-7937 | d614228 | SI-7937 In for, semi before guard never required |
| SI-4583 | 19ab789 | SI-4583 UnicodeEscape does not allow multiple backslashes |
| SI-8388 | 0bac64d | SI-8388 consistently match type trees by originals |
| SI-8387 | f10d754 | SI-8387 don't match new as a function application |
| SI-8350 | 2fea950 | SI-8350 treat single parens equivalently to no-parens in new |
| SI-8451 | a0c3bbd | SI-8451 quasiquotes now handle quirks of secondary constructors |
| SI-8437 | 9326264 | SI-8437 macro runtime now also picks inherited macro implementations |
| SI-8411 | 5e23a6a | SI-8411 match desugared partial functions |
| SI-8200 | fa91b17 | SI-8200 provide an identity liftable for trees |
| SI-7902 | 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| SI-8205 | 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| SI-8126, SI-6566 | 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| SI-8146 | ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| SI-8420 | b6a54a8 | SI-8420 don't crash on unquoting of non-liftable native type |
| SI-8428 | aa1e1d0 | SI-8428 Refactor ConcatIterator |
| SI-8428 | 1fa46a5 | SI-8428 Fix regression in iterator concatenation |

#### Complete commit list
| sha | Title |
|---|---|
| 2ba0453 | Further tweak version of continuations plugin in scala-dist.pom |
| 9fbac09 | SI-8466 fix quasiquote crash on recursively iterable unlifting |
| afccae6 | Refactor rankImplicits, add some more docs |
| d345424 | Refactor: keep DivergentImplicitRecovery logic together. |
| 1c330e6 | SI-8460 Fix regression in divergent implicit recovery |
| 5e795fc | Refactor handling of failures in implicit search |
| 8489be1 | Rebase #3665 |
| 63783f5 | Disable more of the Travis spec build for PR validation |
| 9cc0911 | Minor typographical fixes for lexical syntax chapter |
| f40d63a | Don't mention C# |
| bb2a952 | Reducing overlap of code and math. |
| 3a75252 | Simplify CSS, bigger monospace to match math |
| 91fb5c0 | SI-6054 Modern eta-expansion examples that almost run |
| b3adae6 | SI-6069 Preserve by-name during eta-expansion |
| a89157f | Stubs for references chapter, remains TODO |
| 0b48dc2 | Number files like chapters. Consolidate toc & preface. |
| 0f1dcc4 | Minor cleanup in aisle README |
| 6ec6990 | Skip step bound to fail in Travis PR validation |
| 12720e6 | Remove scala-continuations-plugin from scala-library-all |
| 3560ddc | Start ssh-agent |
| b102ffc | Disable strict host checking |
| 0261598 | Jekyll generated html in spec/ directory |
| 71c1716 | Add language to code blocks. Shorter Example title. |
| abd0895 | Fix #6: automatic section numbering. |
| 5997e32 | #9 try to avoid double slashes in url |
| 09f2a26 | require redcarpet 3.1 for user-friendly anchors |
| f16ab43 | use simple quotes, fix indent, escape dollar |
| 5629529 | liquid requires SSA? |
| 128c5e8 | sort pages in index |
| 8dba297 | base url |
| 3df5773 | formatting |
| 7307a03 | TODO: number headings using css |
| 617bdf8 | mathjax escape dollar |
| a1275c4 | TODO: binding example |
| c61f554 | fix indentation for footnotes |
| 52898fa | allow math in code |
| 827f5f6 | redcarpet |
| 0bc3ec9 | formatting |
| 2f3d0fd | Jekyll 2 config for redcarpet 3.1.1 |
| e6ecfd0 | That was fun: fix internal links. |
| d8a09e2 | formatting |
| 9c757bb | fix some links |
| 453625e | wip: jekyllify |
| 3fb5acc | SI-6054 don't use the defunct List.map2 in example |
| 71e45e0 | SI-5136 correct return type for unapplySeq |
| aa6e4b3 | SI-6195 stable members can only be overridden by stable members |
| 1921528 | SI-5605 case class equals only considers first param section |
| 51f3ac1 | SI-6054 correct eta-expansion in method value using placeholder syntax |
| 78d96ea | formatting: tables and headings |
| 3c0d964 | SI-5155 xml patterns do not support cdata, entity refs or comments |
| 84bba26 | SI-5089 update definition implicit scope in terms of parts of a type |
| 227e11d | SI-7313 method types of implicit and non-implicit parameter sections are never e |
| 7be2a6c | SI-7672 explicit top-level import of Predef precludes implicit one |
| aa64187 | SI-5370 PartialFunction is a Function with queryable domain |
| 4615ec5 | SI-4980 isInstanceOf does not do outer checks |
| f0b37c2 | SI-1972 clarify getter and setter must be declared together |
| 5135bae | SI-5086 clean up EBNF |
| 32e0943 | SI-5065 val/var is optional for a constructor parameter |
| 64b7338 | SI-5209 correct precedence of infix operators starting with ! = |
| 1130d10 | formatting |
| e197cf8 | SI-4249 try/catch accepts expression |
| 622ffd4 | wip |
| d614228 | SI-7937 In for, semi before guard never required |
| 507e58b | github markdown: tables |
| 09c957b | github markdown: use ###### for definitions and notes |
| 9fb8276 | github markdown: use ###### for examples |
| 19ab789 | SI-4583 UnicodeEscape does not allow multiple backslashes |
| 1ca2095 | formatting |
| b75812d | Mention WIP in README |
| 9031467 | Catch up with latex spec. |
| 21ca2cf | convert {\em } to _..._ |
| 37ef8a2 | github markdown: numbered definition |
| b44c598 | github markdown: code blocks |
| 9dec37b | github markdown: drop css classes |
| df2f3f7 | github markdown: headers |
| 839fd6e | github markdown: numbered lists |
| fa4aba5 | new build options |
| b71a2c1 | updated README.md |
| d8f0a93 | rendering error fix |
| ab8f966 | added tex source build |
| a80a894 | Typographical adjustments |
| 34eb920 | Fix fonts to enable both old-style and lining numerals |
| 8f1bd7f | Over-wide line fix for types grammar |
| 9cee383 | Replaced build script with make file |
| 3f339c8 | Minor pagination tweak |
| 50ce322 | Miscellaneous cleanups: |
| 2311e34 | fix poorly labeled section links fix over-wide grammar text |
| e7ade69 | Use the original type faces |
| 2c21733 | Adjust layout |
| 54273a3 | set Luxi Mono and Heuristica (Utopia) as the default fonts for monospace and mai |
| 1352994 | use \sigma instead of raw unicode character in math mode, as it does not render |
| 7691d7f | added build step for ebook |
| 9a8134a | PDF building now working with pandoc 1.10.1 |
| ab50eec | using standard markdown for numbered lists (hopefully better rendering on github |
| 94198c7 | fixed reference to class diagram fixed undefined macro |
| ea177a2 | fixed inline code block |
| cdaeb84 | fixed inline code blocks fixed math array for PDF output |
| 1ec5965 | fixed inline code blocks removed LaTeX labels converted TODOs to comments |
| 3404f54 | fix for undefined macro |
| 990d4f0 | fixed undefined macros and converted comment block |
| 580d5d6 | fix for unicode character conversion error in producing PDF fix for grammar code |
| 1847283 | standard library chapter converted |
| 7066c70 | converted xml expressions and user defined annotations chapters |
| dc958b2 | fixed minor math layout and unsupported commands |
| 2f67c76 | Converted pattern matching chapter |
| a327584 | Implicit Parameters and Values chapter converted |
| a368e9c | expressions chapter converted, some math-mode errors still exist |
| fd283b6 | conversion of classes and objects chapter |
| 79833dc | converted syntax summary |
| b871ec6 | basic declarations and definitions chapter converted, needs second-pass review. |
| bb53357 | types chapter fully converted. Added link to jquery and some experimental code f |
| 3340862 | accidentally committed OS resource |
| eb3e02a | MathJAX configuration for inline math inside code blocks |
| a805b04 | interim commit of conversion of types chapter |
| 7d50d8f | - Grouping of text for examples in Lexical Syntax chapter fixed - Style of examp |
| f938a7c | Identifiers, Names and Scopes chapter converted. Minor CSS tweaks to make exampl |
| 7c16776 | removed some stray LaTeX commands from Lexical Syntax chapter, and a back-refere |
| 82435f1 | experimental restyling of examples to try and look a bit more like the original |
| 4f86c27 | fixed missing newline between example text and delimited code expression |
| 5e2a788 | preface and lexical syntax chapter converted, other chapters split into their ow |
| 0bac64d | SI-8388 consistently match type trees by originals |
| f10d754 | SI-8387 don't match new as a function application |
| 2fea950 | SI-8350 treat single parens equivalently to no-parens in new |
| a0c3bbd | SI-8451 quasiquotes now handle quirks of secondary constructors |
| 9326264 | SI-8437 macro runtime now also picks inherited macro implementations |
| 5e23a6a | SI-8411 match desugared partial functions |
| f9a5880 | introduces Mirror.typeOf |
| fa91b17 | SI-8200 provide an identity liftable for trees |
| db300d4 | [backport] no longer warns on calls to vampire macros |
| a16e003 | Bump version to 2.10.5 for nightly builds. |
| 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| d167f14 | [nomaster] corrects an error in reify’s documentation |
| 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| cbb88ac | [nomaster] Update MiMa and use new wildcard filter |
| b6a54a8 | SI-8420 don't crash on unquoting of non-liftable native type |
| aa1e1d0 | SI-8428 Refactor ConcatIterator |
| 1fa46a5 | SI-8428 Fix regression in iterator concatenation |
| ff02fda | Bump versions for 2.11.0-RC3 |

## April 06, 2014

### Ruminations of a Programmer

#### Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

In the last post I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.

An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an Order is an aggregate. It encapsulates the details that an Order is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)). Note an aggregate can consist of other aggregates - e.g. we have a Customer instance within an Order.
Eric Evans in his book on Domain Driven Design provides an excellent discussion of what constitutes an Aggregate.

# Functional Updates of Aggregates with Lens

This is not a post about Aggregates and how they fit in the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates ground up as immutable objects.

An Algebraic Data Type (ADT) is one of the patterns used to build immutable aggregate objects. Coming primarily from the domain of functional programming, ADTs offer powerful pattern-matching techniques that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give us immutable objects out of the box:

```scala
case class Order(orderNo: String, orderDate: Date, customer: Customer,
  lineItems: Vector[LineItem], shipTo: ShipTo,
  netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)
```

Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using the accessors of case classes. But what about updates? We can update the orderNo of an order like this:

```scala
val o = Order( .. )
o.copy(orderNo = newOrderNo)
```

which gives us a copy of the original order with the new order number. We don't mutate the original order. But anybody with some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates. E.g., in the above case, ShipTo is defined as follows:

```scala
case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)
```

So, here you go in order to update the zip code of a ShipTo:

```scala
val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
s.copy(address = s.address.copy(zip = "80231"))
```

Not really pleasing, and it can go out of bounds in comprehensibility pretty soon.
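The nested-copy update described above can be exercised end to end. A self-contained sketch (Address and ShipTo are taken from the post; the demo object is mine) showing both that the inner field changes and that the original aggregate stays untouched:

```scala
// Address and ShipTo as defined in the post
case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)

object NestedCopyDemo {
  def main(args: Array[String]): Unit = {
    val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
    // updating the innermost field requires a copy at every level of nesting
    val s2 = s.copy(address = s.address.copy(zip = "80231"))
    assert(s2.address.zip == "80233" == false && s2.address.zip == "80231")
    assert(s.address.zip == "80233") // the original value is untouched
  }
}
```

With three or four levels of nesting the chain of copies grows quadratically in noise, which is exactly the pain the lens abstraction below is meant to remove.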
In our domain model we use an abstraction called a Lens for updating aggregates. In very layman's terms, a lens is an encapsulated get and set combination. The get extracts a small part from a larger whole, while the set transforms the larger abstraction with a smaller part taken as a parameter.

```scala
case class Lens[A, B](get: A => B, set: (A, B) => A)
```

This is a naive definition of a Lens in Scala. Sophisticated lens designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the get and the set to generalize the lens definition in terms of another abstraction named Store. As happens so often in functional programming, Store abstracts yet another pattern, the Comonad. You can think of a Comonad as the inverse of a Monad. But in case you are more curious, and have wondered how lenses form "the Coalgebras for the Store Comonad", have a look at the 2 papers here and here.

Anyway, for us mere domain modelers, we will use the Lens implementation as in scalaz. Here's a lens that helps us update the OrderStatus within an Order:

```scala
val orderStatus = Lens.lensu[Order, OrderStatus] (
  (o, value) => o.copy(status = value),
  _.status)
```

and use it as follows:

```scala
val o = Order( .. )
orderStatus.set(o, Placed)
```

which will change the status field of the Order to Placed. Let's have a look at some of the compositional properties of a lens, which help us write readable code for functionally updating nested structures.

# Composition of Lenses

First let's define some individual lenses:
```scala
// lens for updating a ShipTo of an Order
val orderShipTo = Lens.lensu[Order, ShipTo] (
  (o, sh) => o.copy(shipTo = sh),
  _.shipTo)

// lens for updating an address of a ShipTo
val shipToAddress = Lens.lensu[ShipTo, Address] (
  (sh, add) => sh.copy(address = add),
  _.address)

// lens for updating a city of an address
val addressToCity = Lens.lensu[Address, String] (
  (add, c) => add.copy(city = c),
  _.city)
```

And now we compose them to define a lens that directly updates the city of a ShipTo belonging to an Order:

```scala
// compositionality FTW
def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity
```

Now updating a city of a ShipTo in an Order is as simple and expressive as:

```scala
val o = Order( .. )
orderShipToCity.set(o, "London")
```

The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.

Let's look at another use case, where the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a LineItem of an Order. A LineItem is defined as:

```scala
case class LineItem(item: Item, quantity: BigDecimal,
  value: Option[BigDecimal] = None, discount: Option[BigDecimal] = None)
```

and an Order has a collection of LineItems. Let's define a lens that updates the value within a LineItem:

```scala
val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
  (l, v) => l.copy(value = v),
  _.value)
```

and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our lineItemValue lens to a partial lens using the unary operator ~:
```scala
// a lens that updates the value in a specific LineItem within an Order
def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)
```

Now we can use this composite lens to functionally update the value field of each of the items in a Vector of LineItems using some specific business rules:

```scala
(0 to lis.length - 1).foldLeft(lis) { (s, i) =>
  val li = lis(i)
  lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
}
```

In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings, without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.

## April 04, 2014

### Quoi qu'il en soit

#### Becoming really rich with Scala

Update, 31 March 2014: This article uses Scala 2.8.

• An updated version of this code using Scala 2.10 is available on github
• git clone https://github.com/azzoti/get-rich-with-scala.git
• It's an Eclipse Scala IDE project and a Maven project
• It can be run with:
• mvn scala:run -DmainClass=etf.analyzer.Program

Becoming really rich with C# was a great example of what's coming in the version of C# in Visual Studio 2010, and it makes it obvious that C# is leaving Java in the dust. Now that the Visual Studio 2010 beta 2 is available, you can download Luca's code and try it out.

For this post, I've translated the C# code into Scala while trying to preserve the original C# style. To do this I have added support code in order to match the C# style. This was easy and shows off Scala's extensibility. The features and libraries added or used to do this are:

• Scala-Time: A Java Joda Time library wrapper.
• The "using" block from Martin Odersky's FOSDEM '09 talk
• An EventHandler class for simulating C# Events
• The Jetty HttpClient from Eclipse, which I wrapped to resemble the C# WebClient API
• Arithmetic operations for Option[Double]. (Option[Double] is Scala's equivalent of the C# nullable type double?. In C# you can use double? variables in expressions, with an expression returning null if any part of it is null. In Scala, you can't use Option[Double] in arithmetic expressions out of the box, but it's very easy to add this ability in a small library.)
• The Scala code is written with the latest 2.8 pre-release of Scala and uses one or two features from its latest standard library not present in the latest stable release.

While the Scala is slightly shorter than the C# code, it is supported by extra code or libraries that I have found or had to write. C# already has using blocks, events, reasonable datetime management, a WebClient, and nullable double types that handle arithmetic operations sensibly. For whatever reason, the Scala code runs much faster than the C# code, but there is a large amount of internet access involved and I suspect that the C# web client should be configured to use more threads. [Update: Luca just suggested I comment out the C# line ServicePointManager.DefaultConnectionLimit = 10; and this does indeed make the C# code much faster.]

Original C# (see notes after the table):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using System.IO;

namespace ETFAnalyzer {

struct Event {
    internal Event(DateTime date, double price) { Date = date; Price = price; }
    internal readonly DateTime Date;
    internal readonly double Price;
}

class Summary {
    internal Summary(string ticker, string name, string assetClass, string assetSubClass,
            double? weekly, double? fourWeeks, double? threeMonths, double? sixMonths,
            double? oneYear, double? stdDev, double price, double? mav200) {
        Ticker = ticker; Name = name; AssetClass = assetClass; AssetSubClass = assetSubClass;
        // Abracadabra ...
        LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4;
        Weekly = weekly; FourWeeks = fourWeeks; ThreeMonths = threeMonths;
        SixMonths = sixMonths; OneYear = oneYear; StdDev = stdDev;
        Mav200 = mav200; Price = price;
    }
    internal readonly string Ticker;
    internal readonly string Name;
    internal readonly string AssetClass;
    internal readonly string AssetSubClass;
    internal readonly double? LRS;
    internal readonly double? Weekly;
    internal readonly double? FourWeeks;
    internal readonly double? ThreeMonths;
    internal readonly double? SixMonths;
    internal readonly double? OneYear;
    internal readonly double? StdDev;
    internal readonly double? Mav200;
    internal double Price;

    internal static void Banner() {
        Console.Write("{0,-6}", "Ticker");
        Console.Write("{0,-50}", "Name");
        Console.Write("{0,-12}", "Asset Class");
        Console.Write("{0,4}", "RS");
        Console.Write("{0,4}", "1Wk");
        Console.Write("{0,4}", "4Wk");
        Console.Write("{0,4}", "3Ms");
        Console.Write("{0,4}", "6Ms");
        Console.Write("{0,4}", "1Yr");
        Console.Write("{0,6}", "Vol");
        Console.WriteLine("{0,2}", "Mv");
    }

    internal void Print() {
        Console.Write("{0,-6}", Ticker);
        Console.Write("{0,-50}", new String(Name.Take(48).ToArray()));
        Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray()));
        Console.Write("{0,4:N0}", LRS * 100);
        Console.Write("{0,4:N0}", Weekly * 100);
        Console.Write("{0,4:N0}", FourWeeks * 100);
        Console.Write("{0,4:N0}", ThreeMonths * 100);
        Console.Write("{0,4:N0}", SixMonths * 100);
        Console.Write("{0,4:N0}", OneYear * 100);
        Console.Write("{0,6:N0}", StdDev * 100);
        if (Price <= Mav200) Console.WriteLine("{0,2}", "X");
        else Console.WriteLine();
    }
}

class TimeSeries {
    internal readonly string Ticker;
    readonly DateTime _start;
    readonly Dictionary<DateTime, double> _adjDictionary;
    readonly string _name;
    readonly string _assetClass;
    readonly string _assetSubClass;

    internal TimeSeries(string ticker, string name, string assetClass, string assetSubClass,
            IEnumerable<Event> events) {
        Ticker = ticker; _name = name; _assetClass = assetClass; _assetSubClass = assetSubClass;
        _start = events.Last().Date;
        _adjDictionary = events.ToDictionary(e => e.Date, e => e.Price);
    }

    bool GetPrice(DateTime when, out double price, out double shift) {
        // To nullify the effect of hours/min/sec/millisec being different from 0
        when = new DateTime(when.Year, when.Month, when.Day);
        var found = false;
        shift = 1;
        double aPrice = 0;
        while (when >= _start && !found) {
            if (_adjDictionary.TryGetValue(when, out aPrice)) { found = true; }
            when = when.AddDays(-1);
            shift -= 1;
        }
        price = aPrice;
        return found;
    }

    double? GetReturn(DateTime start, DateTime end) {
        var startPrice = 0.0;
        var endPrice = 0.0;
        var shift = 0.0;
        var foundEnd = GetPrice(end, out endPrice, out shift);
        var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift);
        if (!foundStart || !foundEnd) return null;
        else return endPrice / startPrice - 1;
    }

    internal double? LastWeekReturn() { return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now); }
    internal double? Last4WeeksReturn() { return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now); }
    internal double? Last3MonthsReturn() { return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now); }
    internal double? Last6MonthsReturn() { return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now); }
    internal double? LastYearReturn() { return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now); }

    internal double? StdDev() {
        var now = DateTime.Now;
        now = new DateTime(now.Year, now.Month, now.Day);
        var limit = now.AddYears(-3);
        var rets = new List<double>();
        while (now >= _start.AddDays(12) && now >= limit) {
            var ret = GetReturn(now.AddDays(-7), now);
            rets.Add(ret.Value);
            now = now.AddDays(-7);
        }
        var mean = rets.Average();
        var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum();
        var weeklyStdDev = Math.Sqrt(variance / rets.Count);
        return weeklyStdDev * Math.Sqrt(40);
    }

    internal double? MAV200() {
        return _adjDictionary.ToList()
            .OrderByDescending(k => k.Key)
            .Take(200).Average(k => k.Value);
    }

    internal double TodayPrice() {
        var price = 0.0;
        var shift = 0.0;
        GetPrice(DateTime.Now, out price, out shift);
        return price;
    }

    internal Summary GetSummary() {
        return new Summary(Ticker, _name, _assetClass, _assetSubClass,
            LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
            Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(), MAV200());
    }
}

class Program {
    static string CreateUrl(string ticker, DateTime start, DateTime end) {
        return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker
            + "&a=" + (start.Month - 1).ToString() + "&b=" + start.Day.ToString() + "&c=" + start.Year.ToString()
            + "&d=" + (end.Month - 1).ToString() + "&e=" + end.Day.ToString() + "&f=" + end.Year.ToString()
            + "&g=d&ignore=.csv";
    }

    static void Main(string[] args) {
        // If you raise this above 5 you tend to get frequent connection closing on my machine
        // I'm not sure if it is msft network or yahoo web service
        ServicePointManager.DefaultConnectionLimit = 10;
        var tickers = File.ReadAllLines("ETFTest.csv")
            .Skip(1)
            .Select(l => l.Split(new[] { ',' }))
            .Where(v => v[2] != "Leveraged")
            .Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
            .ToArray();
        var len = tickers.Length;
        var start = DateTime.Now.AddYears(-2);
        var end = DateTime.Now;
        var cevent = new CountdownEvent(len);
        var summaries = new Summary[len];
        for (var i = 0; i < len; i++) {
            var t = tickers[i];
            var url = CreateUrl(t.Item1, start, end);
            using (var webClient = new WebClient()) {
                webClient.DownloadStringCompleted += new DownloadStringCompletedEventHandler(downloadStringCompleted);
                webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
            }
        }
        cevent.Wait();
        Console.WriteLine("\n");
        var top15perc = summaries
            .Where(s => s.LRS.HasValue)
            .OrderByDescending(s => s.LRS)
            .Take((int)(len * 0.15));
        var bottom15perc = summaries
            .Where(s => s.LRS.HasValue)
            .OrderBy(s => s.LRS)
            .Take((int)(len * 0.15));
        Console.WriteLine();
        Summary.Banner();
        Console.WriteLine("TOP 15%");
        foreach (var s in top15perc) s.Print();
        Console.WriteLine();
        Console.WriteLine("Bottom 15%");
        foreach (var s in bottom15perc) s.Print();
    }

    static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
        var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
        var tuple = bigTuple.Item1;
        var cevent = bigTuple.Item2;
        var summaries = bigTuple.Item3;
        var i = bigTuple.Item4;
        var ticker = tuple.Item1;
        var name = tuple.Item2;
        var asset = tuple.Item3;
        var subAsset = tuple.Item4;
        if (e.Error == null) {
            var adjustedPrices = e.Result
                .Split(new[] { '\n' })
                .Skip(1)
                .Select(l => l.Split(new[] { ',' }))
                .Where(l => l.Length == 7)
                .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));
            var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
            summaries[i] = timeSeries.GetSummary();
            cevent.Signal();
            Console.Write("{0} ", ticker);
        } else {
            Console.WriteLine("[{0} ERROR] ", ticker);
            summaries[i] = new Summary(ticker, name, "ERROR", "ERROR", 0, 0, 0, 0, 0, 0, 0, 0);
            cevent.Signal();
        }
    }
}
}
```

Scala:

```scala
package etf.analyzer

import scala.io.Source
import org.scala_tools.time.Imports._
import org.scala_tools.option.math.Imports._
import org.joda.time.Days
import org.scala_tools.using.Using
import org.scala_tools.web.WebClient
import org.scala_tools.web.WebClientConnections
import org.scala_tools.web.DownloadStringCompletedEventArgs
import java.io.File
import java.util.concurrent.CountDownLatch

case class Event(date: DateTime, price: Double)

case class Summary(
    ticker: String, name: String, assetClass: String, assetSubClass: String,
    weekly: Option[Double], fourWeeks: Option[Double], threeMonths: Option[Double],
    sixMonths: Option[Double], oneYear: Option[Double],
    stdDev: Double, price: Double, mav200: Double) {

  // Abracadabra ...
  val LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4

  def banner() = {
    printf("%-6s", "Ticker")
    printf("%-50s", "Name")
    printf("%-12s", "Asset Class")
    printf("%4s", "RS")
    printf("%4s", "1Wk")
    printf("%4s", "4Wk")
    printf("%4s", "3Ms")
    printf("%4s", "6Ms")
    printf("%4s", "1Yr")
    printf("%6s", "Vol")
    printf("%2s\n", "Mv")
  }

  def print() = {
    printf("%-6s", ticker)
    printf("%-50s", new String(name.toArray.take(48)))
    printf("%-12s", new String(assetClass.toArray.take(10)))
    printf("%4.0f", LRS * 100 getOrElse null)
    printf("%4.0f", weekly * 100 getOrElse null)
    printf("%4.0f", fourWeeks * 100 getOrElse null)
    printf("%4.0f", threeMonths * 100 getOrElse null)
    printf("%4.0f", sixMonths * 100 getOrElse null)
    printf("%4.0f", oneYear * 100 getOrElse null)
    printf("%6.0f", stdDev * 100)
    if (price <= mav200) printf("%2s\n", "X") else println()
  }
}

case class TimeSeries(
    ticker: String, name: String, assetClass: String, assetSubClass: String,
    private val events: Iterable[Event]) {

  private val _adjDictionary: Map[DateTime, Double] = Map() ++ events.map(e => (e.date -> e.price))
  private val _start = events.last.date

  // Add the sum and average functions to all Iterable[Double] used locally
  private implicit def iterableWithSumAndAverage(c: Iterable[Double]) = new {
    def sum = c.foldLeft(0.0)(_ + _)
    def average = sum / c.size
  }

  def getPrice(whenp: DateTime): Option[(Double, Int)] = {
    var when = new DateTime(whenp.year.get, whenp.month.get, whenp.day.get, 0, 0, 0, 0)
    var found = false
    var shift = 1
    var aPrice = 0.0
    while (when >= _start && !found) {
      if (_adjDictionary.contains(when)) {
        aPrice = _adjDictionary(when)
        found = true
      }
      when = when - 1.days
      shift -= 1
    }
    // Either return the price and the shift or None if no price was found
    if (found) Some((aPrice, shift)) else None
  }

  def getReturn(start: DateTime, end: DateTime): Option[Double] = {
    for {
      (endPrice, daysBefore) <- getPrice(end)
      (startPrice, _) <- getPrice(start + daysBefore.days)
    } yield endPrice / startPrice - 1.0
  }

  def lastWeekReturn = getReturn(DateTime.now - 7.days, DateTime.now)
  def last4WeeksReturn = getReturn(DateTime.now - 28.days, DateTime.now)
  def last3MonthsReturn = getReturn(DateTime.now - 3.months, DateTime.now)
  def last6MonthsReturn = getReturn(DateTime.now - 6.months, DateTime.now)
  def lastYearReturn = getReturn(DateTime.now - 1.years, DateTime.now)

  def stdDev: Double = {
    val today = DateTime.now
    val limit = today - 3.years
    val dates = Iterator.iterate(today)(_ - 7.days)
      .takeWhile(d => d >= (_start + 12.days) && d >= limit)
      .toList
    val rets = dates.map(d => getReturn(d - 7.days, d).get)
    val mean = rets.average
    val variance = rets.map(r => Math.pow(r - mean, 2)).average
    val weeklyStdDev = Math.sqrt(variance)
    weeklyStdDev * Math.sqrt(40)
  }

  def mav200(): Double = {
    _adjDictionary.toList
      .sortWith((elem1, elem2) => elem1._1 >= elem2._1)
      .take(200).map(keyValue => keyValue._2).average
  }

  def todayPrice(): Double = {
    getPrice(DateTime.now) match {
      case None => 0.0
      case Some((price, _)) => price
    }
  }

  def getSummary() = Summary(ticker, name, assetClass, assetSubClass,
    lastWeekReturn, last4WeeksReturn, last3MonthsReturn, last6MonthsReturn,
    lastYearReturn, stdDev, todayPrice, mav200)
}

object Program extends Using {

  def createUrl(ticker: String, start: DateTime, end: DateTime): String = {
    """http://ichart.finance.yahoo.com/table.csv?s=""" + ticker +
      "&a=" + (start.month.get - 1) + "&b=" + start.day.get + "&c=" + start.year.get +
      "&d=" + (end.month.get - 1) + "&e=" + end.day.get + "&f=" + end.year.get +
      "&g=d&ignore=.csv"
  }

  def main(args: Array[String]): Unit = {
    val tickers = Source.fromFile(new File("ETFTest.csv")).getLines()
      .drop(1)
      .map(l => l.trim.split(','))
      .filter(v => v(2) != "Leveraged")
      .map(values => (values(0), values(1), values(2), if (values.length == 4) values(3) else ""))
      .toSeq.toArray
    val len = tickers.length
    val start = DateTime.now - 2.years
    val end = DateTime.now
    val cevent = new CountDownLatch(len)
    val summaries = new Array[Summary](len)
    using(new WebClientConnections(connectionsPerAddress = 10, threadPool = 10)) { webClientConnections =>
      for (i <- 0 until len) {
        val t = tickers(i)
        val url = createUrl(t._1, start, end)
        val webClient = webClientConnections.getWebClient
        webClient.downloadStringCompleted += downloadStringCompleted
        webClient.downloadStringAsync(url, (t, cevent, summaries, i))
      }
      cevent.await
    }
    println
    val top15perc = summaries
      .filter(s => s.LRS.isDefined)
      .sortWith((elem1, elem2) => elem1.LRS >= elem2.LRS)
      .take((len * 0.15).toInt)
    val bottom15perc = summaries
      .filter(s => s.LRS.isDefined)
      .sortWith((elem1, elem2) => elem1.LRS <= elem2.LRS)
      .take((len * 0.15).toInt)
    println
    summaries(0).banner()
    println("TOP 15%")
    for (s <- top15perc) s.print()
    println
    println("Bottom 15%")
    for (s <- bottom15perc) s.print()
  }

  def downloadStringCompleted(e: DownloadStringCompletedEventArgs) = {
    val bigTuple = e.userState.asInstanceOf[((String, String, String, String), CountDownLatch, Array[Summary], Int)]
    val ((ticker, name, asset, subAsset), cevent, summaries, i) = bigTuple
    if (e.error == null) {
      def parse(s: String) = DateTimeFormat.forPattern("yyyy-MM-dd").parseDateTime(s)
      val adjustedPrices = e.result
        .split('\n')
        .drop(1)
        .map(l => l.split(','))
        .filter(l => l.length == 7)
        .map(v => Event(parse(v(0)), v(6).toDouble))
      val timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices)
      summaries(i) = timeSeries.getSummary()
      cevent.countDown()
      printf("%s ", ticker)
    } else {
      printf("[%s ERROR] \n", ticker)
      summaries(i) = Summary(ticker, name, "ERROR", "ERROR", Some(0), Some(0), Some(0), Some(0), Some(0), 0, 0, 0)
      cevent.countDown()
    }
  }
}
```

Notes

TimeSeries getPrice method: Scala does not have output parameters on methods.
It doesn't need them, because the return type of a method can be a tuple, so you can return as many values as you like. The method shown also copies the C# style closely, using loop variables. Another way of writing the same method in Scala, making use of list functions, is:

```scala
def getPrice(when: DateTime): Option[(Double, Int)] = {
  // Find the most recent day with a price starting from when, but don't go back past _start
  val latestDayWithPrice = Iterator.iterate(when)(_ - 1.days)
    .dropWhile(d => !_adjDictionary.contains(d) && d >= _start)
    .next
  if (_adjDictionary.contains(latestDayWithPrice)) {
    val shift = Days.daysBetween(when, latestDayWithPrice).getDays()
    val aPrice = _adjDictionary(latestDayWithPrice)
    Some((aPrice, shift))
  } else {
    None
  }
}
```

TimeSeries getReturn method: the two calls to getPrice() return an Option[(price, days offset)]. Dealing with Option[...] in a for expression is an easy way of handling the possibility of either Option[...] being None: if either getPrice() call returns None, then the yield returns None as well. Another, perhaps simpler to understand, getReturn implementation is:

```scala
def getReturn(start: DateTime, end: DateTime): Option[Double] = {
  val endPriceDetails = getPrice(end)
  if (endPriceDetails == None) return None
  val (endPrice, daysBefore) = endPriceDetails.getOrElse(null)
  val startPriceDetails = getPrice(start + daysBefore.days)
  if (startPriceDetails == None) return None
  val (startPrice, _) = startPriceDetails.getOrElse(null)
  Some(endPrice / startPrice - 1.0)
}
```

TimeSeries mav200 method: the Scala version is slightly harder work than the .NET 3.5 LINQ OrderByDescending method with its key-selector syntax, .OrderByDescending(k => k.Key). The Scala version has to say *how* to do it; the LINQ version says *what* is required. The same is true for the C# use of the Average function, which takes a field selector. I'm not sure whether the concurrent access to the summaries array in downloadStringCompleted is safe.
It seems to work, but I don't know if it is genuinely thread safe; I've just copied the C# code, which may rely on built-in thread-safe array access.

Some of the features of Scala that are shown here:

• Easy Java library interop. See the use of CountDownLatch, Days, File.
• Good old-fashioned casting if you really need it. See asInstanceOf.
• No semicolons.
• No need to use () for a function declaration with no parameters, or for a call to it. See TimeSeries.getSummary(). (Brackets are recommended if there are side effects.)
• Type declarations are unnecessary except in method parameters, but can be declared explicitly if it aids readability. See _adjDictionary.
• Named and default parameters. See WebClientConnections.
• Much less boilerplate, with "case" classes providing automatic constructors, fields, toString, equals, and hashCode. See Event, Summary, TimeSeries.
• A Joda Time wrapper so you can say "today - 3.years".
• Pattern-matching assignment: "val ((ticker,name),cevent,summaries,i) = bigTuple" in downloadStringCompleted.
• A "using" block for automatic resource closing. In the main method, the "using(new WebClientConnections(" block will close down the WebClientConnections thread pool at the end of the block. This is very similar to the C# "using" code.
• Local "implicit" function definitions, allowing you to effectively add methods to existing classes in a tightly controlled and scoped way. (See def iterableWithSumAndAverage.)
• Pattern matching, switch on steroids. See todayPrice().
• Use of powerful list-manipulation functions such as Iterator.iterate and takeWhile to replace traditional state-based loops. See the iterate/dropWhile examples in stdDev() and, in main(), drop, map, filter, sortWith, take. See the infamous foldLeft at work in the sum function.

## April 02, 2014

### Functional Jobs

#### Server Game Developer at Quark Games (Full-time)

Quark Games was established in 2008 with the mission to create hardcore games for the mobile and tablet platforms.
By focusing on making high-quality, innovative, and engaging games, we aim to redefine mobile and tablet gaming as it exists today. We seek to gather a group of individuals who are ambitious but humble professionals, relentless in their pursuit of learning and sharing knowledge. We're looking for people who share our passion for games, aren't afraid to try new and different things, and inspire and push each other to personal and professional success.

As a Server Game Developer, you'll be responsible for implementing server-related game features. You'll be working closely with the server team to create scalable infrastructure, as well as with the client team on feature integration. You'll have to break out of your toolset to push boundaries on technology to deliver the most robust back end to our users.

What you'll do every day

• Develop and maintain features and systems necessary for the game
• Collaborate with team members to create and manage scalable architecture
• Work closely with client developers on feature integration
• Solve real-time problems at a large scale
• Evaluate new technologies and products

What you can bring to the role

• Ability to get stuff done
• Desire to learn new technologies and design patterns
• Care about creating readable, reusable, well-documented, and clean code
• Passion for designing and building systems to scale
• Excitement for building and playing games

Bonus points for

• Experience with a functional language (Erlang, Elixir, Haskell, Scala, Julia, Rust, etc.)
• Experience with a concurrent language (Erlang, Elixir, Clojure, Go, Scala, etc.)
• Being a polyglot programmer with experience across a wide range of languages (Ruby, C#, and Objective-C)
• Experience with database integration and management for NoSQL systems (Riak, Couchbase, Redis, etc.)
• Experience with server operations, deployment, and tools such as Chef or Puppet
• Experience with system administration

Get information on how to apply for this position.

## April 01, 2014

### Francois Armand

#### Where all the activity went?

As you can see, activity on this blog has been non-existent for several years now. For the two visitors wondering: my main focus switched to my family (two sons, another one on the way), Normation (my company doing devops, config management, etc.: http://www.normation.com/) and of course Rudder (http://www.rudder-project.org/). I'm still doing a ton of Scala, and you can find some articles on our company blog (http://blog.normation.com/) or slides from presentations I gave, like the one on Scala + ZeroMQ for Scala.IO 2013. It's on SlideShare: http://fr.slideshare.net/normation/ And of course, there is my GitHub page: https://github.com/fanf/ and Twitter: https://twitter.com/fanf42 Hope to see you on these other media!

### Quoi qu'il en soit

#### Becoming really rich with Java 8

Disclaimer: the C#, Scala and Java 8 algorithms shown and referenced here implement a "momentum investing" algorithm. This is purely for computer-language comparison purposes and should definitely not be taken as investment advice.

In 2009, I saw the post Becoming really rich with C# showcasing the new features in C# 4.5, and was impressed with C#, with its hybrid object-functional approach and collection APIs that give collection operations a SQL-like feel:

```csharp
var adjustedPrices = e.Result
    .Split(new[] { '\n' })
    .Skip(1)
    .Select(l => l.Split(new[] { ',' }))
    .Where(l => l.Length == 7)
    .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));
```

Now let's do that in 7 lines of code in Java 5, 6, or 7. Er, no, sorry. At the time, I was learning Scala, so I translated Becoming really rich with C# into Scala and compared them side by side. Result: see http://quoiquilensoit.blogspot.com/2009/10/becoming-really-rich-with-scala.html The result surprised me.
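The Scala side of that comparison expresses the same CSV-to-events pipeline with drop/map/filter/map. Here is a minimal, self-contained sketch of that shape; to keep it dependency-free, dates are kept as plain strings rather than parsed into Joda DateTime, and the payload in main is made-up sample data, not real Yahoo Finance output:

```scala
// An Event here pairs a raw date string with an adjusted-close price.
case class Event(date: String, price: Double)

object CsvPipeline {
  // Mirrors the C# Split/Skip/Select/Where/Select chain:
  // split into lines, skip the header, split fields, keep complete
  // 7-column rows, and build an Event from columns 0 and 6.
  def adjustedPrices(result: String): List[Event] =
    result
      .split('\n')
      .drop(1)
      .map(_.split(','))
      .filter(_.length == 7)
      .map(v => Event(v(0), v(6).toDouble))
      .toList

  def main(args: Array[String]): Unit = {
    val payload =
      "Date,Open,High,Low,Close,Volume,Adj Close\n" +
      "2009-10-01,10,11,9,10.5,1000,10.4\n" +
      "bad,row\n" +
      "2009-09-30,9,10,8,9.5,900,9.6"
    // The malformed "bad,row" line is silently dropped by the length filter.
    adjustedPrices(payload).foreach(println)
  }
}
```

The point of the comparison is that both languages let the "what" of the transformation read as a single declarative chain, leaving the iteration machinery implicit.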
I thought C# held up pretty well overall. So, a full four years later, Oracle owns Java and Java8 is out with some of the same features that C# was offering in dot net 4.5 in 2010. There is obvious missing stuff that Java 8 still does not have: LINQ, Output parameters. Vars. Tuples. Optional/Nullable numerics. But I tried the same exercise, trying to keep in the spirit of the C# code. The code is on github: https://github.com/azzoti/get-rich-with-java8 git clone https://github.com/azzoti/get-rich-with-java8.git Its an eclipse maven project, but you can run straight from the command line with: mvn exec:java (Make sure you have JDK 8 set up!) Original C#Java 8 See notes after the table using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Net;using System.Threading;using System.Threading.Tasks;using System.IO;namespace ETFAnalyzer {struct Event { internal Event(DateTime date, double price) { Date = date; Price = price; } internal readonly DateTime Date; internal readonly double Price;}class Summary { internal Summary(string ticker, string name, string assetClass, string assetSubClass, double? weekly, double? fourWeeks, double? threeMonths, double? sixMonths, double? oneYear, double? stdDev, double price, double? mav200) { Ticker = ticker; Name = name; AssetClass = assetClass; AssetSubClass = assetSubClass; // Abracadabra ... LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4; Weekly = weekly; FourWeeks = fourWeeks; ThreeMonths = threeMonths; SixMonths = sixMonths; OneYear = oneYear; StdDev = stdDev; Mav200 = mav200; Price = price; } internal readonly string Ticker; internal readonly string Name; internal readonly string AssetClass; internal readonly string AssetSubClass; internal readonly double? LRS; internal readonly double? Weekly; internal readonly double? FourWeeks; internal readonly double? ThreeMonths; internal readonly double? SixMonths; internal readonly double? OneYear; internal readonly double? 
StdDev; internal readonly double? Mav200; internal double Price; internal static void Banner() { Console.Write("{0,-6}", "Ticker"); Console.Write("{0,-50}", "Name"); Console.Write("{0,-12}", "Asset Class"); Console.Write("{0,4}", "RS"); Console.Write("{0,4}", "1Wk"); Console.Write("{0,4}", "4Wk"); Console.Write("{0,4}", "3Ms"); Console.Write("{0,4}", "6Ms"); Console.Write("{0,4}", "1Yr"); Console.Write("{0,6}", "Vol"); Console.WriteLine("{0,2}", "Mv"); } internal void Print() { Console.Write("{0,-6}", Ticker); Console.Write("{0,-50}", new String(Name.Take(48).ToArray())); Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray())); Console.Write("{0,4:N0}", LRS * 100); Console.Write("{0,4:N0}", Weekly * 100); Console.Write("{0,4:N0}", FourWeeks * 100); Console.Write("{0,4:N0}", ThreeMonths * 100); Console.Write("{0,4:N0}", SixMonths * 100); Console.Write("{0,4:N0}", OneYear * 100); Console.Write("{0,6:N0}", StdDev * 100); if (Price <= Mav200) Console.WriteLine("{0,2}", "X"); else Console.WriteLine(); }}class TimeSeries { internal readonly string Ticker; readonly DateTime _start; readonly Dictionary<DateTime, double> _adjDictionary; readonly string _name; readonly string _assetClass; readonly string _assetSubClass; internal TimeSeries(string ticker, string name, string assetClass, string assetSubClass, IEnumerable<event> events) { Ticker = ticker; _name = name; _assetClass = assetClass; _assetSubClass = assetSubClass; _start = events.Last().Date; _adjDictionary = events.ToDictionary(e => e.Date, e => e.Price); } bool GetPrice(DateTime when, out double price, out double shift) { // To nullify the effect of hours/min/sec/millisec being different from 0 when = new DateTime(when.Year, when.Month, when.Day); var found = false; shift = 1; double aPrice = 0; while (when >= _start && !found) { if (_adjDictionary.TryGetValue(when, out aPrice)) { found = true; } when = when.AddDays(-1); shift -= 1; } price = aPrice; return found; } double? 
GetReturn(DateTime start, DateTime end) { var startPrice = 0.0; var endPrice = 0.0; var shift = 0.0; var foundEnd = GetPrice(end, out endPrice, out shift); var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift); if (!foundStart || !foundEnd) return null; else return endPrice / startPrice - 1; } internal double? LastWeekReturn() { return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now); } internal double? Last4WeeksReturn() { return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now); } internal double? Last3MonthsReturn() { return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now); } internal double? Last6MonthsReturn() { return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now); } internal double? LastYearReturn() { return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now); } internal double? StdDev() { var now = DateTime.Now; now = new DateTime(now.Year, now.Month, now.Day); var limit = now.AddYears(-3); var rets = new List<double>(); while (now >= _start.AddDays(12) && now >= limit) { var ret = GetReturn(now.AddDays(-7), now); rets.Add(ret.Value); now = now.AddDays(-7); } var mean = rets.Average(); var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum(); var weeklyStdDev = Math.Sqrt(variance / rets.Count); return weeklyStdDev * Math.Sqrt(40); } internal double? 
MAV200() {
    return _adjDictionary.ToList()
           .OrderByDescending(k => k.Key)
           .Take(200).Average(k => k.Value);
  }
  internal double TodayPrice() {
    var price = 0.0;
    var shift = 0.0;
    GetPrice(DateTime.Now, out price, out shift);
    return price;
  }
  internal Summary GetSummary() {
    return new Summary(Ticker, _name, _assetClass, _assetSubClass,
           LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
           Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(),
           MAV200());
  }
}
class Program {
  static string CreateUrl(string ticker, DateTime start, DateTime end) {
    return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker +
      "&a=" + (start.Month - 1).ToString() + "&b=" + start.Day.ToString() + "&c=" + start.Year.ToString() +
      "&d=" + (end.Month - 1).ToString() + "&e=" + end.Day.ToString() + "&f=" + end.Year.ToString() +
      "&g=d&ignore=.csv";
  }
  static void Main(string[] args) {
    // If you raise this above 5 you tend to get frequent connection closing on my machine
    // I'm not sure if it is the msft network or the yahoo web service
    ServicePointManager.DefaultConnectionLimit = 10;
    var tickers =
      File.ReadAllLines("ETFTest.csv")
      .Skip(1)
      .Select(l => l.Split(new[] { ',' }))
      .Where(v => v[2] != "Leveraged")
      .Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
      .ToArray();
    var len = tickers.Length;
    var start = DateTime.Now.AddYears(-2);
    var end = DateTime.Now;
    var cevent = new CountdownEvent(len);
    var summaries = new Summary[len];
    for (var i = 0; i < len; i++) {
      var t = tickers[i];
      var url = CreateUrl(t.Item1, start, end);
      using (var webClient = new WebClient()) {
        webClient.DownloadStringCompleted +=
          new DownloadStringCompletedEventHandler(downloadStringCompleted);
        webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
      }
    }
    cevent.Wait();
    Console.WriteLine("\n");
    var top15perc =
        summaries
        .Where(s => s.LRS.HasValue)
        .OrderByDescending(s => s.LRS)
        .Take((int)(len * 0.15));
    var bottom15perc =
        summaries
        .Where(s => s.LRS.HasValue)
        .OrderBy(s => s.LRS)
        .Take((int)(len * 0.15));
    Console.WriteLine();
    Summary.Banner();
    Console.WriteLine("TOP 15%");
    foreach (var s in top15perc)
      s.Print();
    Console.WriteLine();
    Console.WriteLine("Bottom 15%");
    foreach (var s in bottom15perc)
      s.Print();
  }
  static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
    var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
    var tuple = bigTuple.Item1;
    var cevent = bigTuple.Item2;
    var summaries = bigTuple.Item3;
    var i = bigTuple.Item4;
    var ticker = tuple.Item1;
    var name = tuple.Item2;
    var asset = tuple.Item3;
    var subAsset = tuple.Item4;
    if (e.Error == null) {
      var adjustedPrices =
          e.Result
          .Split(new[] { '\n' })
          .Skip(1)
          .Select(l => l.Split(new[] { ',' }))
          .Where(l => l.Length == 7)
          .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));
      var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
      summaries[i] = timeSeries.GetSummary();
      cevent.Signal();
      Console.Write("{0} ", ticker);
    } else {
      Console.WriteLine("[{0} ERROR] ", ticker);
      summaries[i] = new Summary(ticker, name, "ERROR", "ERROR", 0, 0, 0, 0, 0, 0, 0, 0);
      cevent.Signal();
    }
  }
}
}

package etf.analyzer;

import static java.lang.System.out;
import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.*;
import java.io.IOException;
import java.nio.file.*;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.Map.Entry;
import java.util.concurrent.CountDownLatch;
import java.util.stream.Stream;

class Event {
  public Event(LocalDate date, double price) {
    this.date = date;
    this.price = price;
  }
  public LocalDate getDate() { return date; }
  public double getPrice() { return price; }
  private LocalDate date;
  private double price;
}

class Summary {
  public Summary(String ticker, String name, String assetClass,
      String assetSubClass, OptionalDouble weekly, OptionalDouble fourWeeks,
      OptionalDouble threeMonths, OptionalDouble sixMonths, OptionalDouble oneYear,
      OptionalDouble stdDev, double price, OptionalDouble mav200) {
    this.ticker = ticker;
    this.name = name;
    this.assetClass = assetClass;
    // this.assetSubClass = assetSubClass;
    // Abracadabra ...
    this.lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d));
    this.weekly = weekly;
    this.fourWeeks = fourWeeks;
    this.threeMonths = threeMonths;
    this.sixMonths = sixMonths;
    this.oneYear = oneYear;
    this.stdDev = stdDev;
    this.mav200 = mav200;
    this.price = price;
  }
  private String ticker;
  private String name;
  private String assetClass;
  // private String assetSubClass;
  public OptionalDouble lrs;
  private OptionalDouble weekly;
  private OptionalDouble fourWeeks;
  private OptionalDouble threeMonths;
  private OptionalDouble sixMonths;
  private OptionalDouble oneYear;
  private OptionalDouble stdDev;
  private OptionalDouble mav200;
  private double price;
  static void banner() {
    out.printf("%-6s", "Ticker");
    out.printf("%-50s", "Name");
    out.printf("%-12s", "Asset Class");
    out.printf("%4s", "RS");
    out.printf("%4s", "1Wk");
    out.printf("%4s", "4Wk");
    out.printf("%4s", "3Ms");
    out.printf("%4s", "6Ms");
    out.printf("%4s", "1Yr");
    out.printf("%6s", "Vol");
    out.printf("%2s\n", "Mv");
  }
  void print() {
    out.printf("%-6s", ticker);
    out.printf("%-50s", name);
    out.printf("%-12s", assetClass);
    out.printf("%4.0f", lrs.orElse(0.0d) * 100);
    out.printf("%4.0f", weekly.orElse(0.0d) * 100);
    out.printf("%4.0f", fourWeeks.orElse(0.0d) * 100);
    out.printf("%4.0f", threeMonths.orElse(0.0d) * 100);
    out.printf("%4.0f", sixMonths.orElse(0.0d) * 100);
    out.printf("%4.0f", oneYear.orElse(0.0d) * 100);
    out.printf("%6.0f", stdDev.orElse(0.0d) * 100);
    if (price <= mav200.orElse(-Double.MAX_VALUE))
      out.printf("%2s\n", "X");
    else
      out.println();
  }
}

class TimeSeries {
  private String ticker;
  private LocalDate _start;
  private Map<LocalDate, Double> _adjDictionary;
  private String _name;
  private String _assetClass;
  private String _assetSubClass;
  public TimeSeries(String ticker, String name, String assetClass,
      String assetSubClass, List<Event> events) {
    this.ticker = ticker;
    this._name = name;
    this._assetClass = assetClass;
    this._assetSubClass = assetSubClass;
    this._adjDictionary = events.stream().collect(toMap(Event::getDate, Event::getPrice));
    this._start = events.size() - 1 > 0 ? events.get(events.size() - 1).getDate() : LocalDate.now().minusYears(99);
  }
  private static final class FindPriceAndShift {
    public FindPriceAndShift(boolean found, double aPrice, int shift) {
      this.found = found;
      this.price = aPrice;
      this.shift = shift;
    }
    private boolean found;
    private double price;
    private int shift;
  }
  private FindPriceAndShift getPrice(LocalDate when) {
    boolean found = false;
    int shift = 1;
    double aPrice = 0.0d;
    while ((when.equals(_start) || when.isAfter(_start)) && !found) {
      if (found = _adjDictionary.containsKey(when)) {
        aPrice = _adjDictionary.get(when);
      }
      when = when.minusDays(1);
      shift -= 1;
    }
    return new FindPriceAndShift(found, aPrice, shift);
  }
  OptionalDouble getReturn(LocalDate start, LocalDate endDate) {
    FindPriceAndShift foundEnd = getPrice(endDate);
    FindPriceAndShift foundStart = getPrice(start.plusDays(foundEnd.shift));
    if (!foundStart.found || !foundEnd.found)
      return OptionalDouble.empty();
    else
      return OptionalDouble.of(foundEnd.price / foundStart.price - 1.0d);
  }
  private OptionalDouble lastWeekReturn() {
    return getReturn(LocalDate.now().minusDays(7), LocalDate.now());
  }
  private OptionalDouble last4WeeksReturn() {
    return getReturn(LocalDate.now().minusDays(28), LocalDate.now());
  }
  private OptionalDouble last3MonthsReturn() {
    return getReturn(LocalDate.now().minusMonths(3), LocalDate.now());
  }
  private OptionalDouble last6MonthsReturn() {
    return getReturn(LocalDate.now().minusMonths(6), LocalDate.now());
  }
  private OptionalDouble lastYearReturn() {
    return getReturn(LocalDate.now().minusYears(1), LocalDate.now());
  }
  private Double sum(Collection<Double> d) {
    return d.parallelStream().reduce(0d, Double::sum);
  }
  private Double avg(Collection<Double> d) {
    return sum(d) / d.size();
  }
  private OptionalDouble stdDev() {
    LocalDate now = LocalDate.now();
    LocalDate limit = now.minusYears(3);
    List<Double> rets = new ArrayList<>();
    while (now.compareTo(_start.plusDays(12)) >= 0 && now.compareTo(limit) >= 0) {
      OptionalDouble ret = getReturn(now.minusDays(7), now);
      rets.add(ret.orElse(0d));
      now = now.minusDays(7);
    }
    Double mean = avg(rets);
    Double variance = avg(rets.parallelStream().map(r -> Math.pow(r - mean, 2)).collect(toList()));
    Double weeklyStdDev = Math.sqrt(variance);
    return OptionalDouble.of(weeklyStdDev * Math.sqrt(40));
  }
  private OptionalDouble MAV200() {
    return OptionalDouble.of(
      _adjDictionary.entrySet().parallelStream()
        .sorted(comparing((Entry<LocalDate, Double> p) -> p.getKey()).reversed())
        .limit(200).mapToDouble(e -> e.getValue()).average().orElse(0d)
    );
  }
  private double todayPrice() {
    return getPrice(LocalDate.now()).price;
  }
  public Summary getSummary() {
    return new Summary(ticker, _name, _assetClass, _assetSubClass,
        lastWeekReturn(), last4WeeksReturn(), last3MonthsReturn(),
        last6MonthsReturn(), lastYearReturn(), stdDev(), todayPrice(), MAV200());
  }
}

public class Program {
  static String createUrl(String ticker, LocalDate start, LocalDate end) {
    return "http://ichart.finance.yahoo.com/table.csv?s=" + ticker
        + "&a=" + (start.getMonthValue() - 1) + "&b=" + start.getDayOfMonth() + "&c=" + start.getYear()
        + "&d=" + (end.getMonthValue() - 1) + "&e=" + end.getDayOfMonth() + "&f=" + end.getYear()
        + "&g=d&ignore=.csv";
  }
  public static void main(String[] args) throws IOException, InterruptedException {
    List<String[]> tickers = Files.lines(FileSystems.getDefault().getPath("ETFs.csv"))
        .skip(1)
        .parallel()
        .map(line -> line.split(",", 4))
        .filter(v -> !v[2].equals("Leveraged"))
        .collect(toList());
    int len = tickers.size();
    LocalDate start = LocalDate.now().minusYears(2);
    LocalDate end = LocalDate.now();
    CountDownLatch cevent = new CountDownLatch(len);
    Summary[] summaries = new Summary[len];
    try (WebClient webClient = new WebClient()) {
      for (int i = 0; i < len; i++) {
        String[] t = tickers.get(i);
        final int index = i;
        webClient.downloadStringAsync(createUrl(t[0], start, end), result -> {
          summaries[index] = downloadStringCompleted(t[0], t[1], t[2], t[3], result);
          cevent.countDown();
        });
      }
      cevent.await();
    }
    Stream<Summary> top15perc = Arrays.stream(summaries)
        .filter(s -> s.lrs.isPresent())
        .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
        .limit((int)(len * 0.15));
    Stream<Summary> bottom15perc = Arrays.stream(summaries)
        .filter(s -> s.lrs.isPresent())
        .sorted(comparing((Summary p) -> p.lrs.get()))
        .limit((int)(len * 0.15));
    System.out.println();
    Summary.banner();
    System.out.println("TOP 15%");
    top15perc.forEach(s -> s.print());
    System.out.println();
    Summary.banner();
    System.out.println("BOTTOM 15%");
    bottom15perc.forEach(s -> s.print());
  }
  public static Summary downloadStringCompleted(String ticker, String name,
      String asset, String subAsset, DownloadStringAsyncCompletedArgs e) {
    Summary summary;
    if (e.getError() == null) {
      List<Event> adjustedPrices = Arrays.stream(e.getResult().split("\n"))
          .skip(1)
          .parallel()
          .map(line -> line.split(",", 7))
          .filter(l -> l.length == 7)
          .map(v -> new Event(LocalDate.parse(v[0], DateTimeFormatter.ISO_LOCAL_DATE), Double.valueOf(v[6])))
          .collect(toList());
      TimeSeries timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
      summary = timeSeries.getSummary();
    } else {
      System.err.printf("[%s ERROR]", ticker);
      final OptionalDouble zero = OptionalDouble.of(0d);
      summary = new Summary(ticker, name, "ERROR", "ERROR", zero, zero, zero, zero, zero, zero, 0d, zero);
    }
    return summary;
  }
}

Some observations:

• The code depends on Yahoo for historical stock prices, and sometimes Yahoo is not available. Wait five minutes and run the program again.

• The Java code is much, much faster than the C# code, but it is going to Yahoo to get historical stock prices, which is going to be the limiting factor.
I don't think the C# should be slower than the Java code, but it is, and I'm not sure why. I'm pretty sure the poor C# performance is due to the .NET WebClient configuration, but I might be wrong.

• In Java 8, just to show how easy it is, I've used parallelStream() and .parallel() in a couple of places, but these can be removed with equivalent functionality. I can see no noticeable difference in performance with or without these calls when using an 8-core machine. As I said above, I believe the limiting factor is going to Yahoo for historical stock prices. There is not that much number crunching to do, and I suspect the time taken to do it pales into insignificance next to the internet fetch time. Doing the calculations in parallel just isn't worth it here. But it's good to see how easy it is to parallelize work if you want to. Being able to simply say Collection.parallelStream() and Stream.parallel() is incredible if you find a sensible use case for it.

• The Java 8 code is a little longer than the C# code. In Java 7, I'm guessing the code would be at least twice as long, and very, very ugly if written in a similar style. The Java 8 code is not as concise as C# or Scala, but at least it's in the same ballpark. Partly this is due to Java POJO boilerplate (e.g. the FindPriceAndShift class and the Event class getters and setters), but that's no big deal (IMO). The Java code is also more verbose because types must be declared, unlike in C#, where you can write "var" instead of a type declaration and the compiler usually infers what you mean.

• Tuples. C# has tuples, Scala has tuples, but apparently their use is the spawn of Satan and civilization will collapse if they are used in Java, even to hold temporary results when parsing comma-separated values into another class. (Oracle will be removing HashMap from Java 9 for similar reasons, apparently ;)) In order not to be arrested by the Java thought police, I avoided succumbing to this. The C# code uses them, but I've managed to avoid them.

• Output parameters. My 2009 Scala translation used a returned tuple instead of the C# output parameters (which I personally found confusing in the C# algorithm). In the Java 8 version I used a POJO, FindPriceAndShift, rather than sell my soul to the wicked tuple monster.

• The C# code uses the "double?" type, a double that can have an empty value. It means you can write LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4, and any of fourWeeks, threeMonths, sixMonths, and oneYear can be empty without causing a null pointer exception or the like. Java 8 does ship with OptionalDouble, but, strangely, you can't say a.add(b).add(c).divide(d). So I wrote an OptionalDouble class which does support this, so you can say lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d)). If you look at the code you can see it's almost trivially simple. That expression is not very pretty compared to the C# or Scala equivalent, but a lot of Java people are used to this kind of method chaining with BigDecimal, and with OptionalDouble it is now also null/empty safe. (The same thing can easily be done to create an OptionalBigDecimal class, obviously. And this OptionalDouble stuff could easily have been done in Java 7 too.)

• Java does not have a C# style WebClient, so I have taken the open source Jetty HTTP client and wrapped it in a simple wrapper to make it look like the C# WebClient. See GitHub for the WebClient class.

• Java lives on open source. If the C# code is slow because the .NET WebClient is doing something stupid, it's hard to find out, as it's closed source. If Jetty's Java HTTP client were broken, you could debug the source or switch to Apache's HTTP client: the best open source libraries emerge through natural selection.
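The `double?` arithmetic discussed above can be mimicked in Scala with `Option[Double]`. This is a hypothetical sketch, not code from the post: `add` and `divide` are helper functions I'm introducing for illustration, not standard library methods.

```scala
// Lifting + and / over possibly-missing values, analogous to C#'s double?.
// add/divide are hypothetical helpers, not part of the Scala standard library.
def add(a: Option[Double], b: Option[Double]): Option[Double] =
  for (x <- a; y <- b) yield x + y

def divide(a: Option[Double], b: Option[Double]): Option[Double] =
  for (x <- a; y <- b if y != 0.0) yield x / y

val fourWeeks   = Option(0.04)
val threeMonths = Option(0.08)
val sixMonths   = Option(0.12)
val oneYear     = Option.empty[Double] // a missing return

// Any empty operand makes the whole expression empty instead of blowing up:
val lrs = divide(add(add(add(fourWeeks, threeMonths), sixMonths), oneYear), Option(4.0))
// lrs is None here, because oneYear is empty
```

The for-comprehension over `Option` plays the same role as the lifted operators on `double?` in C#: emptiness propagates instead of throwing.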
[Update: reaction from Reddit (I love reddit!):

"Sorry, that is pure bullshit. It is perfectly feasible to debug .Net Framework source code: http://msdn.microsoft.com/en-us/library/cc667410.aspx And no, it doesn't have a bug. They've been working on that for generations, and Microsoft puts serious money and has serious people working on stuff, as opposed to a bunch of unknown random hippie weed smokers financed by random coin slot donations."

and

"even if java was faster it doesn't change the fact that it is a useless dinosaur which gets improvements 10 years after the rest of the mainstream languages. All that crappy bloated unmaintainable event-based async code can be converted to a beautiful sequence of async / await in C# 5.0, whereas you will probably not see anything like that in java in the next 20 years due to it's complete lack of evolution and retardedness."]

• There is some surprising functionality missing from Stream and the Collections API. There is no zip, takeWhile, or dropWhile for sequential streams. I'm guessing Java 9, Guava, and others will fill this gap pretty fast.

• When I showed the code below to an experienced colleague who has only used Java <= 6, he said "that looks like C++ to me: that's completely unmaintainable". Sigh.

  Stream<Summary> top15perc = Arrays.stream(summaries)
      .filter(s -> s.lrs.isPresent())
      .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
      .limit((int)(len * 0.15));

## March 31, 2014

### Ruminations of a Programmer

#### Functional Patterns in Domain Modeling - The Specification Pattern

When you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book Domain Driven Design, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language, so that the essence of the domain is not lost in the myriad incidental complexities that your implementation enforces. While being expressive, the model needs to be extensible too.
And when we talk about extensibility, one related attribute is compositionality. Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the Specification pattern, whose most common use case is to implement domain validation.

Eric's book on DDD says this about the Specification pattern ..

"It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria."

A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic. So there's a concept of composition, and we can talk about a Composite Specification when we talk about this pattern. The DDD literature commonly implements this using the Composite design pattern, i.e. with class hierarchies and object composition. In this post we will use function composition instead.

# Specification - Where?

One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root, or of any entity for that matter.

• Should we have the validation as part of the entity? No, it makes the entity bloated. Also, validations may vary based on context, while the core of the entity remains the same.

• Should we have validations as part of the interface? Maybe we consume JSON and build entities out of it. Indeed some validations can belong to the interface; don't hesitate to put them there.

• But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that "states a constraint on the state of another object". They are business rules which the entity needs to honor in order to proceed to the next stage of processing.

We consider the following simple example.
We take an Order entity, and the model identifies the following domain "specifications" that a new Order must satisfy before being thrown into the processing pipeline:

1. it must be a valid order obeying the constraints that the domain requires, e.g. valid date, valid number of line items, etc.
2. it must be approved by a valid approving authority - only then does it proceed to the next stage of the pipeline
3. a customer status check must pass, to ensure that the customer is not black-listed
4. the line items of the order must be checked against inventory to see if the order can be fulfilled

These are the separate steps that are done in sequence by the order processing pipeline as pre-order checks, before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we design needs to honor the sequence as well as check all the constraints that are part of every step.

An important point to note here is that none of the above steps mutates the order - every specification gets a copy of the original Order object as input, on which it checks some domain rules and determines whether the order is suitable to be passed to the next step of the pipeline.

# Jumping on to the implementation ..

Let's take down some implementation notes from what we learnt above ..

• The Order can be an immutable entity, at least for this sequence of operations
• Every specification needs an Order - can we pull some trick out of our hat which prevents this cluttering of the API by passing an Order instance to every specification in the sequence?
• Since we plan to use functional programming principles, how can we model the above sequence as an expression, so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post)?
• All these functions look like they have similar signatures - we need to make them compose with each other

Before I present any more explanation or theory, here are the basic building blocks which implement the notes that we took after talking to the domain experts ..

type ValidationStatus[S] = \/[String, S]

type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]

object ReaderTStatus extends KleisliInstances with KleisliFunctions {
  def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
}

ValidationStatus defines the type that we will return from each of the functions. It's either some status S or an error string that explains what went wrong. It's actually an Either type (right biased) as implemented in scalaz.

One of the things we thought would be cool is to avoid repeating the Order parameter for every method when we invoke the sequence. One of the idiomatic ways of doing this is to use the Reader monad. But here we already have a monad - \/ is a monad. So we need to stack them together using a monad transformer. ReaderT does this job, and ReaderTStatus defines the type that makes our life easier by combining the two of them.

The next step is an implementation of ReaderTStatus, which we do in terms of another abstraction called Kleisli. We will use the scalaz library for this, which implements ReaderT in terms of Kleisli. I will not go into the details of this implementation - in case you are curious, refer to this excellent piece by Eugene.

So, how does a sample specification look? Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..
// the base abstraction
sealed trait Item {
  def itemCode: String
}

// sample implementations
case class ItemA(itemCode: String, desc: Option[String], minPurchaseUnit: Int) extends Item
case class ItemB(itemCode: String, desc: Option[String], nutritionInfo: String) extends Item

case class LineItem(item: Item, quantity: Int)

case class Customer(custId: String, name: String, category: Int)

// a skeleton order
case class Order(orderNo: String, orderDate: Date, customer: Customer, lineItems: List[LineItem])

And here's a specification that checks some of the constraints on the Order object ..

// a basic validation
private def validate = ReaderTStatus[Order, Boolean] { order =>
  if (order.lineItems isEmpty) left(s"Validation failed for order $order")
  else right(true)
}

It's just for illustration and does not contain much domain rules. The important part is how we use the above defined types to implement the function. Order is not an explicit argument to the function - it's curried. The function returns a ReaderTStatus, which itself is a monad and hence allows us to sequence in the pipeline with other specifications. So we get the requirement of sequencing without breaking out of the expression oriented programming style.
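As an aside, the Specification-as-predicate idea can also be seen without any scalaz machinery. The following is a hypothetical sketch (not the post's implementation): a specification is just a predicate on a stripped-down Order, and boolean combinators give us composite specifications.

```scala
// A minimal composite-specification sketch (hypothetical, simpler than the
// ReaderT version in the post): a specification is a predicate on the entity.
case class Order(orderNo: String, lineItemCount: Int, approved: Boolean)

type Spec = Order => Boolean

// boolean combinators over specifications
def and(a: Spec, b: Spec): Spec = o => a(o) && b(o)
def or(a: Spec, b: Spec): Spec  = o => a(o) || b(o)

val hasLineItems: Spec = _.lineItemCount > 0
val isApproved: Spec   = _.approved

// a composite specification built from the two above
val readyForProcessing: Spec = and(hasLineItems, isApproved)

// readyForProcessing(Order("o1", 2, approved = true))  => true
// readyForProcessing(Order("o2", 0, approved = true))  => false
```

What the ReaderT version adds on top of this bare sketch is error reporting (a `String` explaining the failure instead of a bare `false`) and monadic sequencing.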

Here are a few other specifications based on the domain knowledge that we have gathered ..

private def approve = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}

private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}

private def checkInventory = ReaderTStatus[Order, Boolean] { order =>
  right(true)
}

# Wiring them together

But how do we wire these pieces together so that we have the sequence of operations that the domain mandates, and yet retain all the goodness of compositionality in our model? It's actually quite easy, since we have already done the hard work of defining the appropriate types that compose ..

Here's the isReadyForFulfilment method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension, which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the Order supplied.

def isReadyForFulfilment(order: Order) = {
  val s = for {
    _ <- validate
    _ <- approve
    _ <- checkCustomerStatus(order.customer)
    c <- checkInventory
  } yield c
  s(order)
}

So we have the monadic bind implementing the sequencing without breaking the compositionality of the abstractions. In the next instalment we will see how this can be composed with the downstream processing of the order, which will not only read stuff from the entity but mutate it too - of course in a functional way.
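To see concretely why the stacked type short-circuits, here is a hypothetical, dependency-free miniature of the same idea - a ReaderT-like wrapper over Scala's right-biased `Either` (2.12+) with just enough structure for a for-comprehension. The post's real version uses scalaz's Kleisli; this sketch only illustrates the mechanics.

```scala
// A tiny stand-in for ReaderTStatus: a function Order => Either[String, S]
// with flatMap/map, so for-comprehensions sequence specifications and stop
// at the first Left. Hypothetical sketch; scalaz's Kleisli does this for real.
case class Order(orderNo: String, lineItems: List[String])

case class Spec[S](run: Order => Either[String, S]) {
  def flatMap[T](f: S => Spec[T]): Spec[T] =
    Spec(o => run(o).flatMap(s => f(s).run(o))) // thread the same Order through
  def map[T](f: S => T): Spec[T] =
    Spec(o => run(o).map(f))
}

def validate = Spec[Boolean](o =>
  if (o.lineItems.isEmpty) Left(s"Validation failed for order ${o.orderNo}")
  else Right(true))

def approve = Spec[Boolean](_ => Right(true))

def isReadyForFulfilment(order: Order): Either[String, Boolean] = {
  val s = for {
    _ <- validate
    c <- approve
  } yield c
  s.run(order) // the Order is supplied exactly once, at the end
}

// isReadyForFulfilment(Order("o1", List("item"))) == Right(true)
// isReadyForFulfilment(Order("o2", Nil)) == Left("Validation failed for order o2")
```

The key point is in `flatMap`: the same `Order` is passed to both sides (the Reader part), while `Either`'s `flatMap` aborts the chain on the first `Left` (the validation part).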

### Functional Jobs

#### Distributed Systems Engineer (Scala/JVM) at Fauna, Inc. (Full-time)

Distributed Systems Engineer

Founded by the team that scaled Twitter, Fauna is the next-generation database for social, mobile, and games. Join us and become part of a small team of the best software engineers in the world.

How we work

Our work environment is relaxed and individually oriented. We avoid meetings and pair programming. Instead, we require code reviews, and expect you to ask for help when you need it.

We are solving fundamental problems in computer science, so you must approach your work with both humility and rigor. And because our customers place an extreme degree of trust in us, we value pragmatism and personal responsibility very highly.

Benefits

We offer competitive equity, salary, and health benefits, two hypoallergenic cats, and the chance to change the industry forever.

We are family-friendly and support a healthy work-life balance, instead of the crunch and burnout cycle common to startups. In return we ask for your loyalty and hope that you can build your career at Fauna.

We are located in Berkeley, California.

Position

You have designed and implemented multiple distributed systems and operated them in production. You know that you can do better, given the opportunity.

You must have experience with:

• Multiple statically-typed languages
• Asynchronous programming with futures
• Network services
• Commutative replicated datatypes
• Performance analysis

Scala experience is a plus, as is experience with consensus protocols such as Paxos.

Get information on how to apply for this position.

### Quoi qu'il en soit

#### Becoming really rich with Java 8

Disclaimer: the C#, Scala and Java 8 algorithms shown and referenced here implement a "momentum investing" algorithm. This is purely for computer language comparison purposes and should definitely not be taken as investment advice.

In 2009, I saw the post Becoming really rich with C#, showcasing the new features in C# 4.5, and was impressed with C#'s hybrid object-functional approach and collection APIs, which give collection operations a SQL-like feel:

e.Result
.Split(new[] { '\n' })
.Skip(1)
.Select(l => l.Split(new[] { ',' }))
.Where(l => l.Length == 7)
.Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));

Now let's do that in 7 lines of code in Java 5, 6, or 7. Er, no. Sorry.

At the time, I was learning Scala, so I translated Becoming really rich with C# into Scala and compared the two side by side: see http://quoiquilensoit.blogspot.com/2009/10/becoming-really-rich-with-scala.html. The result surprised me: I thought C# held up pretty well overall.
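For comparison, the C# snippet above translates almost line for line into Scala. This is a hypothetical sketch (not the 2009 translation itself), assuming the same Yahoo CSV layout: date in column 0, adjusted close in column 6.

```scala
// A plain-Scala sketch of the C# LINQ pipeline above.
// `csv` stands in for the downloaded Yahoo CSV body (an assumption for illustration).
case class Event(date: String, price: Double)

def parseEvents(csv: String): Seq[Event] =
  csv.split('\n')
     .drop(1)                           // skip the CSV header row
     .map(_.split(','))
     .filter(_.length == 7)             // keep only complete rows
     .map(v => Event(v(0), v(6).toDouble))
     .toSeq

val sample = "Date,Open,High,Low,Close,Volume,Adj Close\n2009-10-01,1,2,3,4,5,6.5\nbad,row"
// parseEvents(sample) keeps only the one complete data row
```

The skip/filter/map chain maps one-to-one onto the C# Skip/Where/Select calls, which is exactly the comparison the post is making.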

So, a full four years later, Oracle owns Java, and Java 8 is out with some of the same features that C# was offering in .NET 4.5 in 2010. There is obvious stuff that Java 8 still does not have: LINQ, output parameters, vars, tuples, optional/nullable numerics. But I tried the same exercise, trying to keep in the spirit of the C# code.

The code is on github: https://github.com/azzoti/get-rich-with-java8

git clone https://github.com/azzoti/get-rich-with-java8.git

It's an Eclipse Maven project, but you can run it straight from the command line with:

mvn exec:java

Original C# | Java 8
See notes after the table

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using System.IO;

namespace ETFAnalyzer {

struct Event {
  internal Event(DateTime date, double price) { Date = date; Price = price; }
  internal readonly DateTime Date;
  internal readonly double Price;
}

class Summary {
  internal Summary(string ticker, string name, string assetClass,
          string assetSubClass, double? weekly, double? fourWeeks,
          double? threeMonths, double? sixMonths, double? oneYear,
          double? stdDev, double price, double? mav200) {
    Ticker = ticker;
    Name = name;
    AssetClass = assetClass;
    AssetSubClass = assetSubClass;
    // Abracadabra ...
    LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4;
    Weekly = weekly;
    FourWeeks = fourWeeks;
    ThreeMonths = threeMonths;
    SixMonths = sixMonths;
    OneYear = oneYear;
    StdDev = stdDev;
    Mav200 = mav200;
    Price = price;
  }
  internal readonly string Ticker;
  internal readonly string Name;
  internal readonly string AssetClass;
  internal readonly string AssetSubClass;
  internal readonly double? LRS;
  internal readonly double? Weekly;
  internal readonly double? FourWeeks;
  internal readonly double? ThreeMonths;
  internal readonly double? SixMonths;
  internal readonly double? OneYear;
  internal readonly double? StdDev;
  internal readonly double? Mav200;
  internal double Price;
  internal static void Banner() {
    Console.Write("{0,-6}", "Ticker");
    Console.Write("{0,-50}", "Name");
    Console.Write("{0,-12}", "Asset Class");
    Console.Write("{0,4}", "RS");
    Console.Write("{0,4}", "1Wk");
    Console.Write("{0,4}", "4Wk");
    Console.Write("{0,4}", "3Ms");
    Console.Write("{0,4}", "6Ms");
    Console.Write("{0,4}", "1Yr");
    Console.Write("{0,6}", "Vol");
    Console.WriteLine("{0,2}", "Mv");
  }
  internal void Print() {
    Console.Write("{0,-6}", Ticker);
    Console.Write("{0,-50}", new String(Name.Take(48).ToArray()));
    Console.Write("{0,-12}", new String(AssetClass.Take(10).ToArray()));
    Console.Write("{0,4:N0}", LRS * 100);
    Console.Write("{0,4:N0}", Weekly * 100);
    Console.Write("{0,4:N0}", FourWeeks * 100);
    Console.Write("{0,4:N0}", ThreeMonths * 100);
    Console.Write("{0,4:N0}", SixMonths * 100);
    Console.Write("{0,4:N0}", OneYear * 100);
    Console.Write("{0,6:N0}", StdDev * 100);
    if (Price <= Mav200)
      Console.WriteLine("{0,2}", "X");
    else
      Console.WriteLine();
  }
}

class TimeSeries {
  internal readonly string Ticker;
  readonly DateTime _start;
  readonly Dictionary<DateTime, double> _adjDictionary;
  readonly string _name;
  readonly string _assetClass;
  readonly string _assetSubClass;
  internal TimeSeries(string ticker, string name, string assetClass,
      string assetSubClass, IEnumerable<Event> events) {
    Ticker = ticker;
    _name = name;
    _assetClass = assetClass;
    _assetSubClass = assetSubClass;
    _start = events.Last().Date;
    _adjDictionary = events.ToDictionary(e => e.Date, e => e.Price);
  }
  bool GetPrice(DateTime when, out double price, out double shift) {
    // To nullify the effect of hours/min/sec/millisec being different from 0
    when = new DateTime(when.Year, when.Month, when.Day);
    var found = false;
    shift = 1;
    double aPrice = 0;
    while (when >= _start && !found) {
      if (_adjDictionary.TryGetValue(when, out aPrice)) {
        found = true;
      }
      when = when.AddDays(-1);
      shift -= 1;
    }
    price = aPrice;
    return found;
  }
  double? GetReturn(DateTime start, DateTime end) {
    var startPrice = 0.0;
    var endPrice = 0.0;
    var shift = 0.0;
    var foundEnd = GetPrice(end, out endPrice, out shift);
    var foundStart = GetPrice(start.AddDays(shift), out startPrice, out shift);
    if (!foundStart || !foundEnd)
      return null;
    else
      return endPrice / startPrice - 1;
  }
  internal double? LastWeekReturn() {
    return GetReturn(DateTime.Now.AddDays(-7), DateTime.Now);
  }
  internal double? Last4WeeksReturn() {
    return GetReturn(DateTime.Now.AddDays(-28), DateTime.Now);
  }
  internal double? Last3MonthsReturn() {
    return GetReturn(DateTime.Now.AddMonths(-3), DateTime.Now);
  }
  internal double? Last6MonthsReturn() {
    return GetReturn(DateTime.Now.AddMonths(-6), DateTime.Now);
  }
  internal double? LastYearReturn() {
    return GetReturn(DateTime.Now.AddYears(-1), DateTime.Now);
  }
  internal double? StdDev() {
    var now = DateTime.Now;
    now = new DateTime(now.Year, now.Month, now.Day);
    var limit = now.AddYears(-3);
    var rets = new List<double>();
    while (now >= _start.AddDays(12) && now >= limit) {
      var ret = GetReturn(now.AddDays(-7), now);
      rets.Add(ret.Value);
      now = now.AddDays(-7);
    }
    var mean = rets.Average();
    var variance = rets.Select(r => Math.Pow(r - mean, 2)).Sum();
    var weeklyStdDev = Math.Sqrt(variance / rets.Count);
    return weeklyStdDev * Math.Sqrt(40);
  }
  internal double? MAV200() {
    return _adjDictionary.ToList()
           .OrderByDescending(k => k.Key)
           .Take(200).Average(k => k.Value);
  }
  internal double TodayPrice() {
    var price = 0.0;
    var shift = 0.0;
    GetPrice(DateTime.Now, out price, out shift);
    return price;
  }
  internal Summary GetSummary() {
    return new Summary(Ticker, _name, _assetClass, _assetSubClass,
           LastWeekReturn(), Last4WeeksReturn(), Last3MonthsReturn(),
           Last6MonthsReturn(), LastYearReturn(), StdDev(), TodayPrice(),
           MAV200());
  }
}

class Program {
  static string CreateUrl(string ticker, DateTime start, DateTime end) {
    return @"http://ichart.finance.yahoo.com/table.csv?s=" + ticker +
      "&a=" + (start.Month - 1).ToString() + "&b=" + start.Day.ToString() + "&c=" + start.Year.ToString() +
      "&d=" + (end.Month - 1).ToString() + "&e=" + end.Day.ToString() + "&f=" + end.Year.ToString() +
      "&g=d&ignore=.csv";
  }
  static void Main(string[] args) {
    // If you raise this above 5 you tend to get frequent connection closing on my machine
    // I'm not sure if it is the msft network or the yahoo web service
    ServicePointManager.DefaultConnectionLimit = 10;
    var tickers =
      File.ReadAllLines("ETFTest.csv")
      .Skip(1)
      .Select(l => l.Split(new[] { ',' }))
      .Where(v => v[2] != "Leveraged")
      .Select(values => Tuple.Create(values[0], values[1], values[2], values[3]))
      .ToArray();
    var len = tickers.Length;
    var start = DateTime.Now.AddYears(-2);
    var end = DateTime.Now;
    var cevent = new CountdownEvent(len);
    var summaries = new Summary[len];
    for (var i = 0; i < len; i++) {
      var t = tickers[i];
      var url = CreateUrl(t.Item1, start, end);
      using (var webClient = new WebClient()) {
        webClient.DownloadStringCompleted +=
          new DownloadStringCompletedEventHandler(downloadStringCompleted);
        webClient.DownloadStringAsync(new Uri(url), Tuple.Create(t, cevent, summaries, i));
      }
    }
    cevent.Wait();
    Console.WriteLine("\n");
    var top15perc =
        summaries
        .Where(s => s.LRS.HasValue)
        .OrderByDescending(s => s.LRS)
        .Take((int)(len * 0.15));
    var bottom15perc =
        summaries
        .Where(s => s.LRS.HasValue)
        .OrderBy(s => s.LRS)
        .Take((int)(len * 0.15));
    Console.WriteLine();
    Summary.Banner();
    Console.WriteLine("TOP 15%");
    foreach (var s in top15perc)
      s.Print();
    Console.WriteLine();
    Console.WriteLine("Bottom 15%");
    foreach (var s in bottom15perc)
      s.Print();
  }
  static void downloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) {
    var bigTuple = (Tuple<Tuple<string, string, string, string>, CountdownEvent, Summary[], int>)e.UserState;
    var tuple = bigTuple.Item1;
    var cevent = bigTuple.Item2;
    var summaries = bigTuple.Item3;
    var i = bigTuple.Item4;
    var ticker = tuple.Item1;
    var name = tuple.Item2;
    var asset = tuple.Item3;
    var subAsset = tuple.Item4;
    if (e.Error == null) {
      var adjustedPrices =
          e.Result
          .Split(new[] { '\n' })
          .Skip(1)
          .Select(l => l.Split(new[] { ',' }))
          .Where(l => l.Length == 7)
          .Select(v => new Event(DateTime.Parse(v[0]), Double.Parse(v[6])));
      var timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
      summaries[i] = timeSeries.GetSummary();
      cevent.Signal();
      Console.Write("{0} ", ticker);
    } else {
      Console.WriteLine("[{0} ERROR] ", ticker);
      summaries[i] = new Summary(ticker, name, "ERROR", "ERROR", 0, 0, 0, 0, 0, 0, 0, 0);
      cevent.Signal();
    }
  }
}
}

```java
package etf.analyzer;

import static java.lang.System.out;
import static java.util.Comparator.comparing;
import static java.util.stream.Collectors.*;

import java.io.IOException;
import java.nio.file.*;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.Map.Entry;
import java.util.concurrent.CountDownLatch;
import java.util.stream.Stream;

class Event {
  public Event(LocalDate date, double price) {
    this.date = date;
    this.price = price;
  }
  public LocalDate getDate() {
    return date;
  }
  public double getPrice() {
    return price;
  }
  private LocalDate date;
  private double price;
}

class Summary {
  public Summary(String ticker, String name, String assetClass,
      String assetSubClass, OptionalDouble weekly, OptionalDouble fourWeeks,
      OptionalDouble threeMonths, OptionalDouble sixMonths, OptionalDouble oneYear,
      OptionalDouble stdDev, double price, OptionalDouble mav200) {
    this.ticker = ticker;
    this.name = name;
    this.assetClass = assetClass;
    // this.assetSubClass = assetSubClass;
    // Abracadabra ...
    this.lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d));
    this.weekly = weekly;
    this.fourWeeks = fourWeeks;
    this.threeMonths = threeMonths;
    this.sixMonths = sixMonths;
    this.oneYear = oneYear;
    this.stdDev = stdDev;
    this.mav200 = mav200;
    this.price = price;
  }
  private String ticker;
  private String name;
  private String assetClass;
  // private String assetSubClass;
  public OptionalDouble lrs;
  private OptionalDouble weekly;
  private OptionalDouble fourWeeks;
  private OptionalDouble threeMonths;
  private OptionalDouble sixMonths;
  private OptionalDouble oneYear;
  private OptionalDouble stdDev;
  private OptionalDouble mav200;
  private double price;
  static void banner() {
    out.printf("%-6s", "Ticker");
    out.printf("%-50s", "Name");
    out.printf("%-12s", "Asset Class");
    out.printf("%4s", "RS");
    out.printf("%4s", "1Wk");
    out.printf("%4s", "4Wk");
    out.printf("%4s", "3Ms");
    out.printf("%4s", "6Ms");
    out.printf("%4s", "1Yr");
    out.printf("%6s", "Vol");
    out.printf("%2s\n", "Mv");
  }
  void print() {
    out.printf("%-6s", ticker);
    out.printf("%-50s", name);
    out.printf("%-12s", assetClass);
    out.printf("%4.0f", lrs.orElse(0.0d) * 100);
    out.printf("%4.0f", weekly.orElse(0.0d) * 100);
    out.printf("%4.0f", fourWeeks.orElse(0.0d) * 100);
    out.printf("%4.0f", threeMonths.orElse(0.0d) * 100);
    out.printf("%4.0f", sixMonths.orElse(0.0d) * 100);
    out.printf("%4.0f", oneYear.orElse(0.0d) * 100);
    out.printf("%6.0f", stdDev.orElse(0.0d) * 100);
    if (price <= mav200.orElse(-Double.MAX_VALUE))
      out.printf("%2s\n", "X");
    else
      out.println();
  }
}

class TimeSeries {
  private String ticker;
  private LocalDate _start;
  private Map<LocalDate, Double> _adjDictionary;
  private String _name;
  private String _assetClass;
  private String _assetSubClass;
  public TimeSeries(String ticker, String name, String assetClass, String assetSubClass, List<Event> events) {
    this.ticker = ticker;
    this._name = name;
    this._assetClass = assetClass;
    this._assetSubClass = assetSubClass;
    this._adjDictionary = events.stream().collect(toMap(Event::getDate, Event::getPrice));
    this._start = events.size() - 1 > 0 ? events.get(events.size() - 1).getDate() : LocalDate.now().minusYears(99);
  }
  private static final class FindPriceAndShift {
    public FindPriceAndShift(boolean found, double aPrice, int shift) {
      this.found = found;
      this.price = aPrice;
      this.shift = shift;
    }
    private boolean found;
    private double price;
    private int shift;
  }
  private FindPriceAndShift getPrice(LocalDate when) {
    boolean found = false;
    int shift = 1;
    double aPrice = 0.0d;
    while ((when.equals(_start) || when.isAfter(_start)) && !found) {
      if (found = _adjDictionary.containsKey(when)) {
        aPrice = _adjDictionary.get(when);
      }
      when = when.minusDays(1);
      shift -= 1;
    }
    return new FindPriceAndShift(found, aPrice, shift);
  }
  OptionalDouble getReturn(LocalDate start, LocalDate endDate) {
    FindPriceAndShift foundEnd = getPrice(endDate);
    FindPriceAndShift foundStart = getPrice(start.plusDays(foundEnd.shift));
    if (!foundStart.found || !foundEnd.found)
      return OptionalDouble.empty();
    else {
      return OptionalDouble.of(foundEnd.price / foundStart.price - 1.0d);
    }
  }
  private OptionalDouble lastWeekReturn() {
    return getReturn(LocalDate.now().minusDays(7), LocalDate.now());
  }
  private OptionalDouble last4WeeksReturn() {
    return getReturn(LocalDate.now().minusDays(28), LocalDate.now());
  }
  private OptionalDouble last3MonthsReturn() {
    return getReturn(LocalDate.now().minusMonths(3), LocalDate.now());
  }
  private OptionalDouble last6MonthsReturn() {
    return getReturn(LocalDate.now().minusMonths(6), LocalDate.now());
  }
  private OptionalDouble lastYearReturn() {
    return getReturn(LocalDate.now().minusYears(1), LocalDate.now());
  }
  private Double sum(Collection<Double> d) {
    return d.parallelStream().reduce(0d, Double::sum);
  }
  private Double avg(Collection<Double> d) {
    return sum(d) / d.size();
  }
  private OptionalDouble stdDev() {
    LocalDate now = LocalDate.now();
    LocalDate limit = now.minusYears(3);
    List<Double> rets = new ArrayList<>();
    while (now.compareTo(_start.plusDays(12)) >= 0 && now.compareTo(limit) >= 0) {
      OptionalDouble ret = getReturn(now.minusDays(7), now);
      rets.add(ret.orElse(0d));
      now = now.minusDays(7);
    }
    Double mean = avg(rets);
    Double variance = avg(rets.parallelStream().map(r -> Math.pow(r - mean, 2)).collect(toList()));
    Double weeklyStdDev = Math.sqrt(variance);
    return OptionalDouble.of(weeklyStdDev * Math.sqrt(40));
  }
  private OptionalDouble MAV200() {
    return OptionalDouble.of(
      _adjDictionary.entrySet().parallelStream()
      .sorted(comparing((Entry<LocalDate, Double> p) -> p.getKey()).reversed())
      .limit(200).mapToDouble(e -> e.getValue()).average().orElse(0d)
    );
  }
  private double todayPrice() {
    return getPrice(LocalDate.now()).price;
  }
  public Summary getSummary() {
    return new Summary(ticker, _name, _assetClass, _assetSubClass,
      lastWeekReturn(), last4WeeksReturn(), last3MonthsReturn(),
      last6MonthsReturn(), lastYearReturn(), stdDev(), todayPrice(),
      MAV200());
  }
}

public class Program {
  static String createUrl(String ticker, LocalDate start, LocalDate end) {
    return "http://ichart.finance.yahoo.com/table.csv?s=" + ticker + "&a="
      + (start.getMonthValue() - 1) + "&b=" + start.getDayOfMonth()
      + "&c=" + start.getYear() + "&d=" + (end.getMonthValue() - 1)
      + "&e=" + end.getDayOfMonth() + "&f=" + end.getYear()
      + "&g=d&ignore=.csv";
  }
  public static void main(String[] args) throws IOException, InterruptedException {
    List<String[]> tickers = Files.lines(FileSystems.getDefault().getPath("ETFs.csv"))
      .skip(1)
      .parallel()
      .map(line -> line.split(",", 4))
      .filter(v -> !v[2].equals("Leveraged"))
      .collect(toList());

    int len = tickers.size();
    LocalDate start = LocalDate.now().minusYears(2);
    LocalDate end = LocalDate.now();
    CountDownLatch cevent = new CountDownLatch(len);
    Summary[] summaries = new Summary[len];

    try (WebClient webClient = new WebClient()) {
      for (int i = 0; i < len; i++) {
        String[] t = tickers.get(i);
        final int index = i;
        webClient.downloadStringAsync(createUrl(t[0], start, end), result -> {
          summaries[index] = downloadStringCompleted(t[0], t[1], t[2], t[3], result);
          cevent.countDown();
        });
      }
      cevent.await();
    }

    Stream<Summary> top15perc =
      Arrays.stream(summaries)
      .filter(s -> s.lrs.isPresent())
      .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
      .limit((int)(len * 0.15));
    Stream<Summary> bottom15perc =
      Arrays.stream(summaries)
      .filter(s -> s.lrs.isPresent())
      .sorted(comparing((Summary p) -> p.lrs.get()))
      .limit((int)(len * 0.15));

    System.out.println();
    Summary.banner();
    System.out.println("TOP 15%");
    top15perc.forEach(s -> s.print());

    System.out.println();
    Summary.banner();
    System.out.println("BOTTOM 15%");
    bottom15perc.forEach(s -> s.print());
  }
  public static Summary downloadStringCompleted(String ticker, String name, String asset, String subAsset,
      DownloadStringAsyncCompletedArgs e) {
    Summary summary;
    if (e.getError() == null) {
      List<Event> adjustedPrices =
        Arrays.stream(e.getResult().split("\n"))
        .skip(1)
        .parallel()
        .map(line -> line.split(",", 7))
        .filter(l -> l.length == 7)
        .map(v -> new Event(LocalDate.parse(v[0], DateTimeFormatter.ISO_LOCAL_DATE), Double.valueOf(v[6])))
        .collect(toList());
      TimeSeries timeSeries = new TimeSeries(ticker, name, asset, subAsset, adjustedPrices);
      summary = timeSeries.getSummary();
    } else {
      System.err.printf("[%s ERROR]", ticker);
      final OptionalDouble zero = OptionalDouble.of(0d);
      summary = new Summary(ticker, name, "ERROR", "ERROR", zero, zero, zero, zero, zero, zero, 0d, zero);
    }
    return summary;
  }
}
```

Some observations:
• The code depends on Yahoo for historical stock prices, and sometimes the Yahoo service is unavailable. If the program fails for that reason, wait five minutes and run it again.
• The Java code is much, much faster than the C# code, even though the trip to Yahoo for historical prices ought to be the limiting factor for both. I don't see why the C# code should be slower, but it is, and I'm not sure why. I'm fairly sure the poor C# performance is down to the .NET WebClient configuration, but I might be wrong.
• In Java 8, just to show how easy it is, I've used parallelStream() and .parallel() in a couple of places, but these can be removed without changing the results. I can see no noticeable difference in performance with or without these calls on an 8-core machine. As I said above, I believe the limiting factor is the fetch from Yahoo: there is not much number crunching to do, and the time it takes pales into insignificance next to the network time, so parallelizing the calculations just isn't worth it here. But it's good to see how easy it is to parallelize work if you want to. Being able to simply say Collection.parallelStream() or Stream.parallel() is incredible if you have a sensible use case for it.
• The Java 8 code is a little longer than the C# code. In Java 7, I'm guessing the equivalent code would be at least twice as long and very ugly if written in a similar style. The Java 8 code is not as concise as C# or Scala, but at least it's in the same ball park. Partly this is due to Java POJO boilerplate (e.g. the FindPriceAndShift class and the Event class getters and setters), but that's no big deal (IMO). The Java code is also more verbose because types must be declared explicitly, whereas in C# you can usually write "var" and let the compiler infer what you mean.
• Tuples. C# has tuples, Scala has tuples, but apparently their use is the spawn of Satan and civilization will collapse if they are used in Java, even just to hold temporary results when parsing comma-separated values into another class. (Oracle will be removing HashMap from Java 9 for similar reasons, apparently ;)) So as not to be arrested by the Java thought police, I avoided succumbing to this: the C# code uses them, but the Java code does not.
• Output parameters. In my 2009 Scala translation I returned a tuple instead of using the C# output parameters (which I personally found confusing in the original algorithm). In the Java 8 version I used a small POJO, FindPriceAndShift, rather than sell my soul to the wicked tuple monster.
• The C# code uses the "double?" type, a double that can hold an empty value, which means you can write LRS = (fourWeeks + threeMonths + sixMonths + oneYear) / 4 and any of fourWeeks, threeMonths, sixMonths, and oneYear can be empty without causing a null pointer exception. Java 8 does ship with OptionalDouble, but, strangely, you can't say a.add(b).add(c).divide(d) with it. So I wrote my own OptionalDouble class which does support this, letting me write lrs = fourWeeks.add(threeMonths).add(sixMonths).add(oneYear).divide(OptionalDouble.of(4.0d)). If you look at the code you can see it's almost trivially simple. That chain is not very pretty compared to the C# or Scala equivalent, but a lot of Java people are used to this kind of method chaining from BigDecimal, and with OptionalDouble it is now also null/empty safe. (The same thing could obviously be done to create an OptionalBigDecimal class, and all of this could easily have been done in Java 7 too.)
• Java does not have a C#-style WebClient, so I have taken the open-source Jetty HTTP client and wrapped it in a simple wrapper to make it look like the C# WebClient. See the WebClient class on GitHub.
• Java lives on open source. If the C# code is slow because the .NET WebClient is doing something stupid, it's hard to find out, as it's closed source. If Jetty's Java HTTP client were broken, you could debug the source or switch to Apache's HTTP client: the best open-source libraries emerge through natural selection.
• There is some surprising functionality missing from the Stream and Collections APIs. There is no zip, and no takeWhile or dropWhile for sequential streams. I'm guessing Java 9, Guava, and others will fill this gap pretty fast.
• When I showed this code to an experienced colleague who has only used Java 6 and earlier, he said "that looks like C++ to me: that's completely unmaintainable". Sigh.
• For example:

```java
Stream<Summary> top15perc =
  Arrays.stream(summaries)
  .filter(s -> s.lrs.isPresent())
  .sorted(comparing((Summary p) -> p.lrs.get()).reversed())
  .limit((int)(len * 0.15));
```
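To make the parallelStream() point above concrete, here is a minimal standalone sketch (the class and numbers are mine, not from the post): switching a pipeline between sequential and parallel execution is a one-word change, and for an associative reduction like a sum the result is identical.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelDemo {
    // Sequential reduction: one thread walks the list in order.
    static long sumSequential(List<Integer> xs) {
        return xs.stream().mapToLong(Integer::longValue).sum();
    }

    // Parallel reduction: the identical pipeline, fanned out over the common
    // ForkJoinPool just by swapping stream() for parallelStream().
    static long sumParallel(List<Integer> xs) {
        return xs.parallelStream().mapToLong(Integer::longValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> nums =
            IntStream.rangeClosed(1, 1000).boxed().collect(Collectors.toList());
        // Sum is associative, so both give the same answer: 500500.
        System.out.println(sumSequential(nums) + " " + sumParallel(nums));
    }
}
```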
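The empty-propagating OptionalDouble described above might look roughly like the following sketch. This is a guess at the shape of such a class, not the author's actual code (his real version is on GitHub and may differ); the name OptDouble is used here just to avoid clashing with java.util.OptionalDouble.

```java
import java.util.function.DoubleBinaryOperator;

// Hypothetical sketch: an optional double whose arithmetic propagates
// emptiness, so a chain of add/divide never throws on a missing value.
public final class OptDouble {
    private final boolean present;
    private final double value;

    private OptDouble(boolean present, double value) {
        this.present = present;
        this.value = value;
    }

    static OptDouble of(double v) { return new OptDouble(true, v); }
    static OptDouble empty()      { return new OptDouble(false, 0d); }

    boolean isPresent()        { return present; }
    double orElse(double other) { return present ? value : other; }

    // If either operand is empty the result is empty; otherwise apply op.
    private OptDouble lift(OptDouble other, DoubleBinaryOperator op) {
        return (present && other.present)
            ? of(op.applyAsDouble(value, other.value))
            : empty();
    }

    OptDouble add(OptDouble o)    { return lift(o, Double::sum); }
    OptDouble divide(OptDouble o) { return lift(o, (a, b) -> a / b); }

    public static void main(String[] args) {
        // One empty leg poisons the whole chain instead of throwing.
        OptDouble lrs = of(0.04).add(of(0.08)).add(empty()).divide(of(4.0));
        System.out.println(lrs.isPresent());  // false
        // A fully-present chain behaves like plain arithmetic: (2 + 6) / 4.
        System.out.println(of(2.0).add(of(6.0)).divide(of(4.0)).orElse(0d));  // 2.0
    }
}
```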
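On the missing takeWhile: until the standard library grows one (Stream.takeWhile eventually arrived in Java 9), a hand-rolled version over a List is only a few lines. This sketch is mine, not from the post; unlike filter, it stops at the first element that fails the predicate.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class TakeWhileDemo {
    // Copy elements from the front of src until the predicate first fails,
    // then stop; later matching elements are NOT revisited (unlike filter).
    static <T> List<T> takeWhile(List<T> src, Predicate<? super T> p) {
        List<T> out = new ArrayList<>();
        for (T t : src) {
            if (!p.test(t)) break;
            out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 10, 2, 1);
        // Stops at 10; the trailing 2 and 1 are dropped even though they match.
        System.out.println(takeWhile(xs, x -> x < 5));  // [1, 2, 3]
    }
}
```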

## March 23, 2014

### scala-lang.org

#### Scala 2.10.4 is now available!

We are very happy to announce the final release of Scala 2.10.4!

The release is available for download from scala-lang.org or from Maven Central.

The Scala team and contributors fixed 33 issues since 2.10.3!

In total, 36 RC1 pull requests, 12 RC2 pull requests and 3 RC3 pull requests were merged on GitHub.

### Known Issues

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built right in is available through the following update-site for Eclipse 4.2/4.3 (Juno/Kepler):

Have a look at the getting started guide for more info.

### New features in the 2.10 series

Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

• Value Classes

• Implicit Classes

• String Interpolation

• Futures and Promises

• Dynamic and applyDynamic

• Dependent method types:

• def identity(x: AnyRef): x.type = x // the return type says we return exactly what we got
• New ByteCode emitter based on ASM

• Can target JDK 1.5, 1.6 and 1.7

• Emits 1.6 bytecode by default

• Old 1.5 backend is deprecated

• A new Pattern Matcher

• rewritten from scratch to generate more robust code (no more exponential blow-up!)

• code generation and analyses are now independent (the latter can be turned off with -Xno-patmat-analysis)

• Implicits (-implicits flag)

• Diagrams (-diagrams flag, requires graphviz)

• Groups (-groups)

• Modularized Language features

• Parallel Collections are now configurable with custom thread pools

• Akka Actors now part of the distribution

• scala.actors has been deprecated and the Akka implementation is now included in the distribution.

• See the actors migration project for more information.

• Performance Improvements

• Faster inliner

• Range#sum is now O(1)

• Update of ForkJoin library

• Fixes in immutable TreeSet/TreeMap

• Improvements to PartialFunctions

• Addition of ??? and NotImplementedError

• Addition of IsTraversableOnce + IsTraversableLike type classes for extension methods

• Deprecations and cleanup

• Floating point and octal literal syntax deprecation

• Removed scala.dbc

### Experimental features

The API is subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

#### A big thank you to all the contributors!

| # | Author |
|---:|---|
| 26 | Jason Zaugg |
| 5 | Eugene Burmako |
| 3 | A. P. Marki |
| 3 | Simon Schaefer |
| 3 | Mirco Dotta |
| 3 | Luc Bourlier |
| 2 | Paul Phillips |
| 2 | François Garillot |
| 1 | Mark Harrah |
| 1 | James Ward |
| 1 | Heather Miller |
| 1 | Roberto Tyley |

#### Commits and the issues they fixed since v2.10.3

| Issue(s) | Commit | Message |
|---|---|---|
| SI-7902 | 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| SI-8205 | 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| SI-8126, SI-6566 | 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| SI-8146 | ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| SI-6443, SI-8143 | 1baf11a | SI-8143 Fix bug with super-accessors / dependent types |
| SI-8152 | 9df2dcc | [nomaster] SI-8152 Backport variance validator performance fix |
| SI-8111 | c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| SI-8111 | 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| SI-7120, SI-8114 | 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| SI-7636, SI-6563 | 255c51b | SI-6563 Test case for already-fixed crasher |
| SI-8104 | c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| SI-8085 | 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| SI-8085 | a12dd9c | Test demonstrating SI-8085 |
| SI-6426 | 47562e7 | Revert "SI-6426, importable _." |
| SI-8062 | f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| SI-7912 | 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| SI-8060 | bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| SI-7995 | 5ed834e | SI-7995 completion imported vars and vals |
| SI-8019 | c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| SI-8029 | fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| SI-7439 | 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| SI-8010 | 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| SI-7982 | 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| SI-6913 | 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| SI-7458 | 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| SI-7548 | 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| SI-7548 | b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| SI-8005 | 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| SI-8004 | 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| SI-7463, SI-8003 | b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| SI-7280 | 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| SI-7915 | 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| SI-7776 | d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| SI-6546 | 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| SI-7638, SI-4012 | e09a8a2 | SI-4012 Mixin and specialization work well |
| SI-7519 | 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| SI-7519 | ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| SI-4936, SI-6026 | e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| SI-6026 | 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| SI-7295 | 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| SI-7020 | 7b56021 | Disable tests for SI-7020 |
| SI-7783 | 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| SI-7815 | 733b322 | SI-7815 Dealias before deeming method type as dependent |

#### Complete commit list!

| sha | Title |
|---|---|
| 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| d167f14 | [nomaster] corrects an error in reify’s documentation |
| 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| cbb88ac | [nomaster] Update MiMa and use new wildcard filter |
| 1baf11a | SI-8143 Fix bug with super-accessors / dependent types |
| 9df2dcc | [nomaster] SI-8152 Backport variance validator performance fix |
| c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| bd4adf5 | More clear implicitNotFound error for ExecutionContext |
| 255c51b | SI-6563 Test case for already-fixed crasher |
| c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| a12dd9c | Test demonstrating SI-8085 |
| 3fa2c97 | Report error on code size overflow, log method name. |
| 2aa9da5 | Partially revert f8d8f7d08d. |
| 47562e7 | Revert "SI-6426, importable _." |
| f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| 9cdbe28 | Fixup #3248 missed a spot in pack.xml |
| 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| e555106 | Remove docs/examples; they reside at scala/scala-dist |
| dc6dd58 | Remove unused android test and corresponding license. |
| f8d8f7d | Do not distribute partest and its dependencies. |
| 5ed834e | SI-7995 completion imported vars and vals |
| c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| 3faa2ee | [nomaster] better error messages for various macro definition errors |
| 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| 0c963c9 | [nomaster] teaches toolbox about -Yrangepos |
| 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| eb9f0f7 | [nomaster] Adds test cases for scope completion |
| 3a8796d | [nomaster] Test infrastructure for scope completion |
| 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| ec89b59 | Upgrade pax-url-aether to 1.6.0. |
| 1d29c0a | [backport] Add buildcharacter.properties to .gitignore. |
| 852a947 | Allow retrieving STARR from non-standard repo for PR validation |
| 40af1e0 | Allow publishing only core (pr validation) |
| ba0718f | Render relevant properties to buildcharacter.properties |
| d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| 6045a05 | Fix completion after application with implicit arguments |
| 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| e09a8a2 | SI-4012 Mixin and specialization work well |
| 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| 7b56021 | Disable tests for SI-7020 |
| 8986ee4 | Disable flaky presentation compiler test. |
| 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| ee9138e | Bump version to 2.10.4 for nightlies |
| 733b322 | SI-7815 Dealias before deeming method type as dependent |

## March 19, 2014

### scala-lang.org

#### Scala 2.11.0-RC3 is now available!

We are very pleased to announce Scala 2.11.0-RC3, the second (sic) release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central.

There won’t be an RC2 release because we missed a blocker issue (thanks for the reminder, Chee Seng!). Unfortunately, the mistake wasn’t caught until after the tag was pushed. Jason quickly fixed the bug, which is the only difference between RC3 and RC2.

Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate (or the final) will be cut on Friday March 28, if there are no unresolved blocker bugs at noon (PST). Our goal is to have the next release be the final – please help us make sure there are no important regressions!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x eco-system, please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

For production use, we recommend the latest stable release, 2.10.4.

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

The Scala team and contributors fixed 601 bugs that are exclusive to Scala 2.11.0-RC3! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

Since the last RC, we fixed 54 issues via 37 merged pull requests.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, according to git log --no-merges --oneline master --not 2.10.x --format='%aN' | sort | uniq -c | sort -rn, 111 people contributed code, tests, and/or documentation to Scala 2.11.x: Paul Phillips, Jason Zaugg, Eugene Burmako, Adriaan Moors, Den Shabalin, Simon Ochsenreither, A. P. Marki, Miguel Garcia, James Iry, Denys Shabalin, Rex Kerr, Grzegorz Kossakowski, Vladimir Nikolaev, Eugene Vigdorchik, François Garillot, Mirco Dotta, Rüdiger Klaehn, Raphael Jolly, Kenji Yoshida, Paolo Giarrusso, Antoine Gourlay, Hubert Plociniczak, Aleksandar Prokopec, Simon Schaefer, Lex Spoon, Andrew Phillips, Sébastien Doeraene, Luc Bourlier, Josh Suereth, Jean-Remi Desjardins, Vojin Jovanovic, Vlad Ureche, Viktor Klang, Valerian, Prashant Sharma, Pavel Pavlov, Michael Thorpe, Jan Niehusmann, Heejong Lee, George Leontiev, Daniel C. Sobral, Christoffer Sawicki, yllan, rjfwhite, Volkan Yazıcı, Ruslan Shevchenko, Robin Green, Olivier Blanvillain, Lukas Rytz, Iulian Dragos, Ilya Maykov, Eugene Yokota, Erik Osheim, Dan Hopkins, Chris Hodapp, Antonio Cunei, Andriy Polishchuk, Alexander Clare, 杨博, srinivasreddy, secwall, nermin, martijnhoekstra, jinfu-leng, folone, Yaroslav Klymko, Xusen Yin, Trent Ogren, Tobias Schlatter, Thomas Geier, Stuart Golodetz, Stefan Zeiger, Scott Carey, Samy Dindane, Sagie Davidovich, Runar Bjarnason, Roland Kuhn, Roberto Tyley, Robert Nix, Robert Ladstätter, Rike-Benjamin Schuppner, Rajiv, Philipp Haller, Nada Amin, Mike Morearty, Michael Bayne, Mark Harrah, Luke Cycon, Lee Mighdoll, Konstantin Fedorov, Julio Santos, Julien Richard-Foy, Juha Heljoranta, Johannes Rudolph, Jiawei Li, Jentsch, Jason Swartz, James Ward, James Roper, Havoc Pennington, Evgeny Kotelnikov, Dmitry Petrashko, Dmitry Bushev, David Hall, Daniel Darabos, Dan Rosen, Cody Allen, Carlo Dapor, Brian McKenna, Andrey Kutejko, Alden Torres.

Thank you all very much.

If you find any errors or omissions in these release notes, please submit a PR!

### Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

### Available projects

The following Scala projects have already been released against 2.11.0-RC3! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"         %% "scalacheck"         % "1.11.3"
"org.scalafx"            %% "scalafx"            % "1.0.0-R8"
"org.scalafx"            %% "scalafx"            % "8.0.0-R4"
"com.typesafe.akka"      %% "akka-actor"         % "2.3.0"
"com.github.scopt"       %% "scopt"              % "3.2.0"
"org.scalatest"          %% "scalatest"          % "2.1.2"
"org.specs2"             %% "specs2"             % "2.3.10"
"org.scalaz"             %% "scalaz-core"        % "7.0.6"
"org.scala-lang.modules" %% "scala-async"        % "0.9.0"

The following projects were released against 2.11.0-RC1, with an RC3 build hopefully following soon:

"io.argonaut"            %% "argonaut"           % "6.0.3"
"com.nocandysw"          %% "platform-executing" % "0.5.0"
"com.clarifi"            %% "f0"                 % "1.1.1"
"org.parboiled"          %% "parboiled-scala"    % "1.1.6"
"com.sksamuel.scrimage"  %% "scrimage"           % "1.3.16"

### Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full sbt build and the Maven build, see example.

scalaVersion        := "2.11.0-RC3"

crossScalaVersions  := Seq("2.11.0-RC3", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.0"
    case _ =>
      libraryDependencies.value
  }
}

### Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

• SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.
• SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues). Most importantly, private members are no longer inherited. Thus, this does not type check: class Foo[T] { private val bar: T = ???; new Foo[String] { bar: String } }, as the bar in bar: String refers to the bar with type T. The Foo[String]’s bar is not inherited, and thus not in scope, in the refinement. (Example from SI-8371, see also SI-8426.)
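The SI-4577 change is easiest to see in a small sketch (the `Foo` and `check` names here are illustrative, not from the release notes):

```scala
object Foo

// Under 2.11, a stable-identifier type pattern compiles to a reference
// comparison (Foo eq x), as the spec requires; 2.10 silently used Foo == x.
def check(x: Any): String = x match {
  case _: Foo.type => "is Foo"
  case _           => "other"
}

assert(check(Foo) == "is Foo")
assert(check("bar") == "other")
```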

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

• SI-6809 Case classes without a parameter list are no longer allowed.
• SI-7618 Octal number literals are no longer supported.
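For SI-6809, the fix is simply to give the case class an explicit (possibly empty) parameter list; a quick sketch with hypothetical names:

```scala
// 2.11 requires an explicit parameter list on case classes (SI-6809):
case class Ping()    // OK
// case class Pong   // rejected in 2.11

// The synthesized apply and structural equality work as before.
assert(Ping() == Ping())
```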

Finally, some notable improvements and bug fixes:

• SI-7296 Case classes with > 22 parameters are now allowed.
• SI-3346 Implicit arguments of implicit conversions now guide type inference.
• SI-6240 Thread safety of reflection API.
• #3037 Experimental support for SAM synthesis.
• #2848 Name-based pattern-matching.
• SI-6169 Infer bounds of Java-defined existential types.
• SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
• SI-5917 Improve public AST creation facilities.
• SI-8063 Expose much needed methods in public reflection/macro API.
• SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

### Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler): deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

• SI-7605 Procedure syntax (only under -Xfuture).
• SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
• SI-8035 Automatic insertion of () on missing argument lists.
• SI-6675 Auto-tupling in patterns.
• SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
• SI-1503 Unsound type assumption for stable identifier and literal patterns.
• SI-7629 View bounds (under -Xfuture).
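To illustrate the SI-6455 point: a type participates in for-comprehension guards (`if`) via `withFilter`, so a container must define it explicitly rather than rely on the deprecated fallback to `filter`. A minimal hypothetical container:

```scala
// Hypothetical container: map/flatMap/withFilter make it usable
// in a for-comprehension, including guards (SI-6455: withFilter is
// required; the compiler no longer rewrites it to filter).
final class Box[A](val items: List[A]) {
  def map[B](f: A => B): Box[B] = new Box(items.map(f))
  def flatMap[B](f: A => Box[B]): Box[B] = new Box(items.flatMap(f(_).items))
  def withFilter(p: A => Boolean): Box[A] = new Box(items.filter(p))
}

val box = new Box(List(1, 2, 3, 4))
// Desugars to box.withFilter(_ % 2 == 0).map(_ * 2)
val evensDoubled = for (x <- box if x % 2 == 0) yield x * 2

assert(evensDoubled.items == List(4, 8))
```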

We’d like to emphasize the following library deprecations:

• #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
• scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
• SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
• SI-3235 Deprecate round on Int and Long (#3581).
• We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

### Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

#### Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

#### Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that, because it aggregates the modules, it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)
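Builds that want all the official modules in one go can depend on the aggregate directly; a hypothetical sbt fragment using the coordinates named above (remembering that scala-library-all carries the weaker, backwards-only compatibility guarantee):

```scala
// sbt: pull in scala-library plus the official modules (scala-xml, etc.)
// via the aggregate artifact introduced in 2.11.0.
libraryDependencies += "org.scala-lang" % "scala-library-all" % "2.11.0"
```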

### New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

• Collections

• Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
• Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively).
• BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
• List has improved performance on map, flatMap, and collect.
• See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
• Modularization

• The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above.
• The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
• Reflection, macros and quasiquotes

• Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
• See also this summary of the experimental side of the 2.11 development cycle.
• #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
• Back-end

• The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
• A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
• Branch elimination through constant analysis #2214
• Compiler Performance

• Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
• We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
• Improve performance of reflection SI-6638
• REPL

• Warnings

• Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val.

• Slimming down the compiler

• The experimental .NET backend has been removed from the compiler.
• Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
• Search and destroy mission for ~5000 chunks of dead code. #1648
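As a quick usage sketch of the new specialized maps mentioned under Collections above (keys and values here are arbitrary):

```scala
import scala.collection.mutable

// LongMap avoids boxing Long keys; AnyRefMap is tuned for reference keys.
val byId = mutable.LongMap[String]()
byId(1L) = "one"
byId(2L) = "two"
assert(byId.getOrElse(1L, "?") == "one")

val byName = mutable.AnyRefMap[String, Int]()
byName("a") = 1
assert(byName("a") == 1)
```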

Scala is now distributed under the standard 3-clause BSD license. Originally, Scala adopted that same 3-clause BSD license, but it was slightly reworded over the years, and thus the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

## March 13, 2014

### Functional Jobs

#### Senior Engineer & Architect - Clojure, JavaScript and more at Q-Centrix (Full-time)

Help us build an amazing team from the ground up in San Diego! As Senior Engineer, you’ll lead selection for our full technology stack and architect services-oriented solutions to our most complex problems.

Our exciting technical challenges help reduce healthcare costs. You’ll:

• Apply machine learning to help hospitals allocate nursing resources effectively
• Build awesome, next-generation healthcare applications impacting quality of care
• Implement a stable and highly available infrastructure focused on security (HIPAA)
• Coordinate day-to-day work automatically for hundreds of internal operations staff
• Work with and contribute to the best open source tools, languages and frameworks

A general list of requirements:

• Several years web development or related software engineering experience
• Demonstrated success as a team lead or senior engineer and mentorship skills
• Expertise with at least one of: Clojure, Scala, JavaScript
• A test-first mentality with the ability to make stable, working products quickly
• Want to live or move to San Diego and help mold a new company’s culture

We will be a small team and we want people that love getting their hands dirty. We’re serious about building a great place to work in San Diego. Please get in touch to talk about some of our ideas for promoting engineer growth:

• Conference travel and attendance at least once yearly, more for presenters
• Host internal/external speakers for tech talks, possibly open to the public
• Allow for flexible work hours and the opportunity to work from home
• Start a book library in the office with company purchased books on a range of topics
• Set aside some time each week for experimental projects (i.e., 20% time)

We also offer competitive salaries, insurance, and a 401k plan. You will report directly to the VP of Technology & Development, a fellow engineer. Team management responsibilities are flexible depending on your preference.

Formed in 2010, Q-Centrix provides outsourced clinical data abstraction, analysis, and reporting services to hospitals. Q-Centrix is the largest and fastest growing provider of quality related outsourcing services in the nation. A recent partnership with growth-focused private equity firm Sterling Partners gives Q-Centrix the resources and managerial expertise to continue growing at a rapid rate.

Get information on how to apply for this position.

#### Clojure(Script) Web Application Developer at Q-Centrix (Full-time)

Healthcare is complex. It also offers amazing opportunities to build applications that make a significant impact directly to people's lives by helping hospitals improve the quality of care they deliver. At Q-Centrix, we extract and use quality-of-care data to drive results that matter. Today, our tools and services are helping hundreds of hospitals throughout the US take action to improve care.

As a team member, you'll help reduce healthcare costs and:

• Apply machine learning to help hospitals allocate nursing resources effectively
• Build awesome web applications based around Clojure APIs and ClojureScript front-ends (think Om)
• Coordinate day-to-day work automatically for hundreds of internal operations staff
• Work with and contribute to the best open source tools, languages and frameworks

A general list of requirements:

• Several years web development experience
• At least 6 months of experience with Clojure in a production setting
• Demonstrated success as a team lead or senior engineer with mentorship skills
• A test-first mentality with the ability to make stable, working products quickly
• We deal with protected health information so you must be located in the US
• Experience with both Java and JavaScript are nice to have

We will be a small team and we want people that love getting their hands dirty. We’re serious about building a great place to work in San Diego. Please get in touch to talk about some of our ideas for promoting engineer growth.

We also offer competitive salaries, insurance, and a 401k plan. You will report directly to the VP of Technology & Development, a fellow engineer.

Formed in 2010, Q-Centrix provides outsourced clinical data abstraction, analysis, and reporting services to hospitals. Q-Centrix is the largest and fastest growing provider of quality related outsourcing services in the nation. A recent partnership with growth-focused private equity firm Sterling Partners gives Q-Centrix the resources and managerial expertise to continue growing at a rapid rate.

Get information on how to apply for this position.

## March 05, 2014

### scala-lang.org

#### Scala 2.11.0-RC1 is now available!

We are very pleased to announce the first release candidate of Scala 2.11.0! Download it now from scala-lang.org or via Maven Central.

Please do try out this release candidate to help us find any serious regressions before the final release. The next release candidate will be cut on Monday March 17, if there are no unresolved blocker bugs at noon (PST). Subsequent RCs will be released on a weekly schedule, with Monday at noon (PST) being the cut-off for blocker bug reports. Our goal is to have no more than three RCs for this release – please help us achieve this by testing your project soon!

Code that compiled on 2.10.x without deprecation warnings should compile on 2.11.x (we do not guarantee this for experimental APIs, such as reflection). If not, please file a regression. We are working with the community to ensure availability of the core projects of the Scala 2.11.x eco-system, please see below for a list. This release is not binary compatible with the 2.10.x series, to allow us to keep improving the Scala standard library.

For production use, we recommend the latest stable release, 2.10.3 (2.10.4 final coming soon).

The Scala 2.11.x series targets Java 6, with (evolving) experimental support for Java 8. In 2.11.0, Java 8 support is mostly limited to reading Java 8 bytecode and parsing Java 8 source. Stay tuned for more complete (experimental) Java 8 support.

The Scala team and contributors fixed 544 bugs that are exclusive to Scala 2.11.0-RC1! We also backported as many as possible. With the release of 2.11, 2.10 backports will be dialed back.

Since the last milestone, we fixed 133 issues via 154 merged pull requests.

A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, participating in mailing lists and other public fora, and – of course – submitting and reviewing pull requests! You are all awesome.

Concretely, between Jan 2013 and today, 69 contributors have helped improve Scala!

With special thanks to the team at EPFL: @xeno-by, @densh, @magarciaEPFL, @VladimirNik, @lrytz, @VladUreche, @heathermiller, @sjrd, @hubertp, @OlivierBlanvillain, @namin, @cvogt, @vjovanov.

If you find any errors or omissions in these release notes, please submit a PR!

### Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list.

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built in is available from this update site for Eclipse 4.2/4.3 (Juno/Kepler). Please have a look at the getting started guide for more info.

### Available projects

The following Scala projects have already been released against 2.11.0-RC1! We’d love to include yours in this list as soon as it’s available – please submit a PR to update these release notes.

"org.scalacheck"    %% "scalacheck"         % "1.11.3"
"org.scalafx"       %% "scalafx"            % "1.0.0-R8"
"org.scalatest"     %% "scalatest"          % "2.1.0"
"org.specs2"        %% "specs2"             % "2.3.9"
"com.typesafe.akka" %% "akka-actor"         % "2.3.0-RC4"
"org.scalaz"        %% "scalaz-core"        % "7.0.6"
"com.nocandysw"     %% "platform-executing" % "0.5.0"

NOTE: RC1 ships with akka-actor 2.3.0-RC4 (the final is out now, but wasn’t yet available when RC1 was cut). The next Scala 2.11 RC will ship with akka-actor 2.3.0 final.

### Cross-building with sbt 0.13

When cross-building between Scala versions, you often need to vary the versions of your dependencies. In particular, the new scala modules (such as scala-xml) are no longer included in scala-library, so you’ll have to add an explicit dependency on it to use Scala’s xml support.

Here’s how we recommend handling this in sbt 0.13. For the full build, see @gkossakowski’s example.

scalaVersion        := "2.11.0-RC1"

crossScalaVersions  := Seq("2.11.0-RC1", "2.10.3")

// add scala-xml dependency when needed (for Scala 2.11 and newer)
// this mechanism supports cross-version publishing
libraryDependencies := {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, scalaMajor)) if scalaMajor >= 11 =>
      libraryDependencies.value :+ "org.scala-lang.modules" %% "scala-xml" % "1.0.0"
    case _ =>
      libraryDependencies.value
  }
}

### Important changes

For most cases, code that compiled under 2.10.x without deprecation warnings should not be affected. We’ve verified this by compiling a sizeable number of open source projects.

Changes to the reflection API may cause breakages, but these breakages can be easily fixed in a manner that is source-compatible with Scala 2.10.x. Follow our reflection/macro changelog for detailed instructions.

We’ve decided to fix the following more obscure deviations from specified behavior without deprecating them first.

• SI-4577 Compile x match { case _ : Foo.type => } to Foo eq x, as specified. It used to be Foo == x (without warning). If that’s what you meant, write case Foo =>.

The following changes were made after a deprecation cycle. (Thank you, @soc, for leading the deprecation effort!)

• SI-6809 Case classes without a parameter list are no longer allowed.
• SI-7618 Octal number literals are no longer supported.

Finally, some notable improvements and bug fixes:

• SI-7296 Case classes with > 22 parameters are now allowed.
• SI-3346 Implicit arguments of implicit conversions now guide type inference.
• SI-6240 Thread safety of reflection API.
• #3037 Experimental support for SAM synthesis.
• #2848 Name-based pattern-matching.
• SI-7475 Improvements to access checks, aligned with the spec (see also the linked issues).
• SI-6169 Infer bounds of Java-defined existential types.
• SI-6566 Right-hand sides of type aliases are now considered invariant for variance checking.
• SI-5917 Improve public AST creation facilities.
• SI-8063 Expose much needed methods in public reflection/macro API.
• SI-8126 Add -Xsource option (make 2.11 type checker behave like 2.10 where possible).

To catch future changes like this early, you can run the compiler under -Xfuture, which makes it behave like the next major version, where possible, to alert you to upcoming breaking changes.

### Deprecations

Deprecation is essential to two of the 2.11.x series’ three themes (faster/smaller/stabler): deprecations make the language and the libraries smaller, and thus easier to use and maintain, which ultimately improves stability. We are very proud of Scala’s first decade, which brought us to where we are, and we are actively working on minimizing the downsides of this legacy, as exemplified by 2.11.x’s focus on deprecation, modularization and infrastructure work.

The following language “warts” have been deprecated:

• SI-7605 Procedure syntax (only under -Xfuture).
• SI-5479 DelayedInit. We will continue support for the important extends App idiom. We won’t drop DelayedInit until there’s a replacement for important use cases. (More details and a proposed alternative.)
• SI-6455 Rewrite of .withFilter to .filter: you must implement withFilter to be compatible with for-comprehensions.
• SI-8035 Automatic insertion of () on missing argument lists.
• SI-6675 Auto-tupling in patterns.
• SI-7247 NotNull, which was never fully implemented – slated for removal in 2.12.
• SI-1503 Unsound type assumption for stable identifier and literal patterns.
• SI-7629 View bounds (under -Xfuture).

We’d like to emphasize the following library deprecations:

• #3103, #3191, #3582 Collection classes and methods that are (very) difficult to extend safely have been slated for being marked final. Proxies and wrappers that were not adequately implemented or kept up-to-date have been deprecated, along with other minor inconsistencies.
• scala-actors is now deprecated and will be removed in 2.12; please follow the steps in the Actors Migration Guide to port to Akka Actors.
• SI-7958 Deprecate scala.concurrent.future and scala.concurrent.promise.
• SI-3235 Deprecate round on Int and Long (#3581).
• We are looking for maintainers to take over the following modules: scala-swing, scala-continuations. 2.12 will not include them if no new maintainer is found. We will likely keep maintaining the other modules (scala-xml, scala-parser-combinators), but help is still greatly appreciated.

Deprecation is closely linked to source and binary compatibility. We say two versions are source compatible when they compile the same programs with the same results. Deprecation requires qualifying this statement: “assuming there are no deprecation warnings”. This is what allows us to evolve the Scala platform and keep it healthy. We move slowly to guarantee smooth upgrades, but we want to keep improving as well!

### Binary Compatibility

When two versions of Scala are binary compatible, it is safe to compile your project on one Scala version and link against another Scala version at run time. Safe run-time linkage (only!) means that the JVM does not throw a (subclass of) LinkageError when executing your program in the mixed scenario, assuming that none arise when compiling and running on the same version of Scala. Concretely, this means you may have external dependencies on your run-time classpath that use a different version of Scala than the one you’re compiling with, as long as they’re binary compatible. In other words, separate compilation on different binary compatible versions does not introduce problems compared to compiling and running everything on the same version of Scala.

We check binary compatibility automatically with MiMa. We strive to maintain a similar invariant for the behavior (as opposed to just linkage) of the standard library, but this is not checked mechanically (Scala is not a proof assistant so this is out of reach for its type system).

#### Forwards and Back

We distinguish forwards and backwards compatibility (think of these as properties of a sequence of versions, not of an individual version). Maintaining backwards compatibility means code compiled on an older version will link with code compiled with newer ones. Forwards compatibility allows you to compile on new versions and run on older ones.

Thus, backwards compatibility precludes the removal of (non-private) methods, as older versions could call them, not knowing they would be removed, whereas forwards compatibility disallows adding new (non-private) methods, because newer programs may come to depend on them, which would prevent them from running on older versions (private methods are exempted here as well, as their definition and call sites must be in the same compilation unit).

These are strict constraints, but they have worked well for us in the Scala 2.10.x series. They didn’t stop us from fixing 372 issues in the 2.10.x series post 2.10.0. The advantages are clear, so we will maintain this policy in the 2.11.x series, and are looking (but not yet committing!) to extend it to include major versions in the future.

#### Concretely

Just like the 2.10.x series, we guarantee forwards and backwards compatibility of the "org.scala-lang" % "scala-library" % "2.11.x" and "org.scala-lang" % "scala-reflect" % "2.11.x" artifacts, except for anything under the scala.reflect.internal package, as scala-reflect is still experimental. We also strongly discourage relying on the stability of scala.concurrent.impl and scala.reflect.runtime, though we will only break compatibility for severe bugs here.

Note that we will only enforce backwards binary compatibility for the new modules (artifacts under the groupId org.scala-lang.modules). As they are opt-in, it’s less of a burden to require having the latest version on the classpath. (Without forward compatibility, the latest version of the artifact must be on the run-time classpath to avoid linkage errors.)

Finally, Scala 2.11.0 introduces scala-library-all to aggregate the modules that constitute a Scala release. Note that, because it aggregates the modules, it does not provide forward binary compatibility, whereas the core scala-library artifact does. We consider the versions of the modules that "scala-library-all" % "2.11.x" depends on to be the canonical ones that are part of the official Scala distribution. (The distribution itself is defined by the new scala-dist maven artifact.)

### New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

• Collections

• Immutable HashMaps and HashSets perform faster filters, unions, and the like, with improved structural sharing (lower memory usage or churn).
• Mutable LongMap and AnyRefMap have been added to provide improved performance when keys are Long or AnyRef (performance enhancement of up to 4x or 2x respectively).
• BigDecimal is more explicit about rounding and numeric representations, and better handles very large values without exhausting memory (by avoiding unnecessary conversions to BigInt).
• List has improved performance on map, flatMap, and collect.
• See also Deprecation above: we have slated many classes and methods to become final, to clarify which classes are not meant to be subclassed and to facilitate future maintenance and performance improvements.
• Modularization

• The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, and swing, as well as the (unsupported) continuations plugin and library, are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library, as explained above.
• The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.
• Reflection, macros and quasiquotes

• Please see this detailed changelog that lists all significant changes and provides advice on forward and backward compatibility.
• See also this summary of the experimental side of the 2.11 development cycle.
• #3321 introduced Sprinter, a new AST pretty-printing library! Very useful for tools that deal with source code.
• Back-end

• The GenBCode back-end (experimental in 2.11). See @magarciaepfl’s extensive documentation.
• A new experimental way of compiling closures, implemented by @JamesIry. With -Ydelambdafy:method anonymous functions are compiled faster, with a smaller bytecode footprint. This works by keeping the function body as a private (static, if no this reference is needed) method of the enclosing class, and at the last moment during compilation emitting a small anonymous class that extends FunctionN and delegates to it. This sets the scene for a smooth migration to Java 8-style lambdas (not yet implemented).
• Branch elimination through constant analysis #2214
• Compiler Performance

• Incremental compilation has been improved significantly. To try it out, upgrade to sbt 0.13.2-M2 and add incOptions := incOptions.value.withNameHashing(true) to your build! Other build tools are also supported. More info at this sbt issue – that’s where most of the work happened. More features are planned, e.g. class-based tracking.
• We’ve been optimizing the batch compiler’s performance as well, and will continue to work on this during the 2.11.x cycle.
• Improved performance of reflection (SI-6638)
• REPL

• Warnings

• Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val.

• Slimming down the compiler

• The experimental .NET backend has been removed from the compiler.
• Scala 2.10 shipped with new implementations of the Pattern Matcher and the Bytecode Emitter. We have removed the old implementations.
• Search and destroy mission for ~5000 chunks of dead code. #1648
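As a quick illustration of the specialized mutable maps mentioned under Collections above (a minimal sketch, not code from the release notes):

```scala
import scala.collection.mutable

// LongMap keeps Long keys unboxed, unlike HashMap[Long, V]
val byId = mutable.LongMap.empty[String]
byId(1L) = "one"
byId(2L) = "two"
println(byId.getOrElse(1L, "?")) // one

// AnyRefMap is specialized for AnyRef keys such as String
val byName = mutable.AnyRefMap.empty[String, Int]
byName("one") = 1
println(byName.getOrElse("one", 0)) // 1
```

Both behave like ordinary mutable maps; the speed-up comes purely from the specialized key representation.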

Scala is now distributed under the standard 3-clause BSD license. The original license was also 3-clause BSD, but it was slightly reworded over the years, and so the “Scala License” was born. We’re now back to the standard formulation to avoid confusion.

### Eric Torreborre

#### Streaming with previous and next

<status class="ok">

The Scalaz streams library is very attractive but it might feel unfamiliar because this is not your standard collection library.

This short post shows how to produce a stream of elements from another stream so that we get a triplet with: the previous element, the current element, the next element.

### With Scala collections

With regular Scala collections, this is not too hard. We first create a list of all the previous elements. We create them as options because there will not be a previous element for the first element of the list. Then we create a list of next elements (also a list of options) and we zip everything with the input list:

</status><status class="ok">
```scala
def withPreviousAndNext[T] = (list: List[T]) => {
  val previousElements = None +: list.map(Some(_)).dropRight(1)
  val nextElements     = list.drop(1).map(Some(_)) :+ None
  // plus some flattening of the triplet
  (previousElements zip list zip nextElements) map { case ((a, b), c) => (a, b, c) }
}

withPreviousAndNext(List(1, 2, 3))
```

> List((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

### And streams

The code above can be translated pretty straightforwardly to scalaz processes:

</status><status class="ok">
```scala
def withPreviousAndNext[F[_], T] = (p: Process[F, T]) => {
  val previousElements = emit(None) fby p.map(Some(_))
  val nextElements     = p.drop(1).map(Some(_)) fby emit(None)
  (previousElements zip p zip nextElements).map { case ((a, b), c) => (a, b, c) }
}

val p1 = emitAll((1 to 3).toSeq).toSource
withPreviousAndNext(p1).runLog.run
```

> Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

However, what we generally want with streams are combinators that we can pipe onto a given Process. We want to write:

```scala
def withPreviousAndNext[T]: Process1[T, T] = ???

val p1 = emitAll((1 to 3).toSeq).toSource

// produces the stream of (previous, current, next)
p1 |> withPreviousAndNext
```

How can we write this?

### As a combinator

The trick is to use recursion to keep state; this is actually how many of the process1 combinators in the library are written. Let's see how this works on a simpler example: what happens if we just want a stream where elements are zipped with their previous value? Here is what we can write:

</status><status class="ok">
```scala
def withPrevious[T]: Process1[T, (Option[T], T)] = {
  def go(previous: Option[T]): Process1[T, (Option[T], T)] =
    await1[T].flatMap { current =>
      emit((previous, current)) fby go(Some(current))
    }
  go(None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withPrevious).runLog.run
```

> Vector((None,1), (Some(1),2), (Some(2),3))

Inside the withPrevious method we recursively call go with the state we need to track. In this case we want to keep track of the previous element (the first call passes None because there is no previous element for the first element of the stream). Then go awaits a new element. Each time a new element arrives, we emit the (previous, current) pair, then recursively call go, which will again wait for the next element, knowing that the previous element is now current.
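If the stream machinery obscures the idea, the same state-threading recursion can be written against a plain List (a simplified analogue for illustration, not scalaz-stream code):

```scala
// Thread the "previous" element through the recursion, just like `go` does:
// each step emits a pair and recurses with the updated state
def withPrevious[T](list: List[T]): List[(Option[T], T)] = {
  def go(previous: Option[T], rest: List[T]): List[(Option[T], T)] = rest match {
    case Nil             => Nil
    case current :: tail => (previous, current) :: go(Some(current), tail)
  }
  go(None, list)
}

println(withPrevious(List(1, 2, 3))) // List((None,1), (Some(1),2), (Some(2),3))
```

The only difference in the Process1 version is that the "rest of the list" is replaced by awaiting the next element from the stream.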

We can do something similar, but a bit more complex for withNext:

</status><status class="ok">
```scala
def withNext[T]: Process1[T, (T, Option[T])] = {
  def go(current: Option[T]): Process1[T, (T, Option[T])] =
    await1[T].flatMap { next =>
      current match {
        // accumulate the first element
        case None    => go(Some(next))
        // if we have a current element, emit it with the next
        // but when there's no more next, emit it with None
        case Some(c) => (emit((c, Some(next))) fby go(Some(next))).orElse(emit((c, None)))
      }
    }
  go(None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withNext).runLog.run
```

> Vector((1,Some(2)), (2,Some(3)), (2,None))

Here, we start by accumulating the first element of the stream, and then, when we get the next one, we emit both of them and make a recursive call remembering what is now the current element. But the process we return in flatMap has an orElse clause. It says "by the way, if you don't have any more elements (no more next), just emit current and None".

Now with both withPrevious and withNext we can create a withPreviousAndNext process:

</status><status class="ok">
```scala
def withPreviousAndNext[T]: Process1[T, (Option[T], T, Option[T])] = {
  def go(previous: Option[T], current: Option[T]): Process1[T, (Option[T], T, Option[T])] =
    await1[T].flatMap { next =>
      current.map { c =>
        emit((previous, c, Some(next))) fby go(Some(c), Some(next))
      }.getOrElse(
        go(previous, Some(next))
      ).orElse(emit((current, next, None)))
    }
  go(None, None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withPreviousAndNext).runLog.run
```
</status><status class="ok">

> Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

The code is pretty similar but this time we keep track of both the "previous" element and the "current" one.

### emit(last paragraph)

I hope this will help beginners like me get started with scalaz-stream, and I'd be happy if scalaz-stream experts out there leave comments if there's anything that can be improved (for instance: is there an effective way to combine withPrevious and withNext to get withPreviousAndNext?).

Finally, I need to add that, in order to get proper performance/side-effect control for the withNext and withPreviousAndNext processes, you need to use the lazy branch of scalaz-stream. It contains a fix for orElse which prevents it from being evaluated more often than necessary.

</status>

### Functional Jobs

#### Developer at Northwestern University (Full-time)

The NetLogo team at Northwestern University (near Chicago) is hiring a full-time developer.

This might interest you if you want to:

• work with researchers at a university
• make things for kids, teachers, and scientists
• write Scala and CoffeeScript
• hack on compilers and interpreters
• do functional programming
• use the Play framework
• write open source software
• do your work on GitHub (https://github.com/NetLogo)

The CCL is looking for a full-time developer to work on NetLogo, focusing on designing web based modeling applications in Javascript, and programming (Scala and/or Java) of the NetLogo desktop application, including GUI work.

This Software Developer position is based at Northwestern University’s Center for Connected Learning and Computer-Based Modeling (CCL), working in a small collaborative development team in a university research group that also includes professors, postdocs, graduate students, and undergraduates, supporting the needs of multiple research projects. A major focus would be on development of NetLogo, an open-source modeling environment for both education and scientific research. CCL grants also involve development work on HubNet and other associated tools for NetLogo, including research and educational NSF grants involving building NetLogo-based science curricula for schools.

NetLogo is a programming language and agent-based modeling environment. The NetLogo language is a dialect of Logo/Lisp specialized for building agent-based simulations of natural and social phenomena. NetLogo has many thousands of users ranging from grade school students to advanced researchers. A collaborative extension of NetLogo, called HubNet, enables groups of participants to run participatory simulation activities in classrooms and distributed participatory simulations in social science research.

Specific Responsibilities:

• Collaborates with the NetLogo development team in designing features for NetLogo, HubNet and web-based versions of these applications;
• Writes code independently, and in the context of a team of experienced software engineers and principal investigator;
• Creates, updates and documents existing models using NetLogo, HubNet and web-based applications;
• Creates new such models;
• Supports development of new devices to interact with HubNet;
• Interacts with commercial and academic partners to help determine design and functional requirements for NetLogo and HubNet;
• Interacts with user community including responding to bug reports, questions, and suggestions, and interacting with open-source contributors;
• Performs data collection, organization, and summarization for projects;
• Assists with coordination of team activities;
• Performs related duties as required or assigned.

Minimum Qualifications:

• A bachelor's degree in computer science or a closely related field or the equivalent combination of education, training and experience from which comparable skills and abilities may be acquired;
• Enthusiasm for writing clean, modular, well-tested code.

Desirable Qualifications:

• Experience with working effectively as part of a small software development team, including close collaboration, distributed version control, and automated testing;
• Experience with building web-based applications, both server-side and client-side components, particularly with html5 and JavaScript and/or CoffeeScript;
• Experience with at least one JVM language such as Java;
• Experience with Scala programming, or enthusiasm for learning it;
• Experience designing and working with GUIs, including the Swing toolkit;
• Experience with Haskell, Lisp, or other functional languages;
• Interest in and experience with programming language implementation, functional programming, and metaprogramming;
• Experience with GUI design; language design and compilers; Interest in and experience with computer-based modeling and simulation, especially agent-based simulation;
• Interest in and experience with distributed, multiplayer, networked systems like HubNet;
• Experience working on research projects in an academic environment;
• Experience with open-source software development and supporting the growth of an open-source community; experience with Unix system administration;
• Interest in education and an understanding of secondary school math and science content.

The Northwestern campus is in Evanston, Illinois on the Lake Michigan shore, adjacent to Chicago and easily reachable by public transportation.

Get information on how to apply for this position.

### Quoi qu'il en soit

Good watch:

# UTF8 Unicode described in 10 minutes

Perfect!

## February 24, 2014

### scala-lang.org

#### Scala 2.10.4-RC3 is now available!

We are very happy to announce the third release candidate of Scala 2.10.4! If no serious blocking issues are found, this will become the final 2.10.4 version.

The release is available for download from scala-lang.org or from Maven Central.

The Scala team and contributors fixed 31 issues since 2.10.3!

In total, 39 RC1 pull requests, 12 RC2 pull requests and 3 RC3 pull requests were merged on GitHub.

### Known Issues

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built right in is available through the following update-site:

Have a look at the getting started guide for more info.

### New features in the 2.10 series

Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

• Value Classes

• Implicit Classes

• String Interpolation

• Futures and Promises

• Dynamic and applyDynamic

• Dependent method types:

• def identity(x: AnyRef): x.type = x // the return type says we return exactly what we got
• New ByteCode emitter based on ASM

• Can target JDK 1.5, 1.6 and 1.7

• Emits 1.6 bytecode by default

• Old 1.5 backend is deprecated

• A new Pattern Matcher

• rewritten from scratch to generate more robust code (no more exponential blow-up!)

• code generation and analyses are now independent (the latter can be turned off with -Xno-patmat-analysis)

• Implicits (-implicits flag)

• Diagrams (-diagrams flag, requires graphviz)

• Groups (-groups)

• Modularized Language features

• Parallel Collections are now configurable with custom thread pools

• Akka Actors now part of the distribution

• scala.actors have been deprecated and the akka implementation is now included in the distribution.

• See the actors migration project for more information.

• Performance Improvements

• Faster inliner

• Range#sum is now O(1)

• Update of ForkJoin library

• Fixes in immutable TreeSet/TreeMap

• Improvements to PartialFunctions

• Addition of ??? and NotImplementedError

• Addition of IsTraversableOnce + IsTraversableLike type classes for extension methods

• Deprecations and cleanup

• Floating point and octal literal syntax deprecation

• Removed scala.dbc
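A few of these 2.10 features in action (an illustrative sketch, not code from the release notes):

```scala
// String interpolation: s"..." splices expressions into strings
val name = "Scala"
println(s"Hello, $name") // Hello, Scala

// Implicit class: adds an `exclaim` extension method to String
implicit class RichString(s: String) {
  def exclaim: String = s + "!"
}
println("hi".exclaim) // hi!

// ???: a typed placeholder that throws NotImplementedError at runtime
def todo: Int = ???
println(scala.util.Try(todo).isFailure) // true
```

Because ??? has type Nothing, it type-checks wherever a value of any type is expected, which makes it handy for stubbing out unfinished code.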

### Experimental features

The API is subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

#### A big thank you to all the contributors!

| # | Author |
|---|--------|
| 26 | Jason Zaugg |
| 5 | Eugene Burmako |
| 3 | A. P. Marki |
| 3 | Simon Schaefer |
| 3 | Mirco Dotta |
| 3 | Luc Bourlier |
| 2 | Paul Phillips |
| 2 | François Garillot |
| 1 | Mark Harrah |
| 1 | James Ward |
| 1 | Heather Miller |
| 1 | Roberto Tyley |

#### Commits and the issues they fixed since v2.10.3

| Issue(s) | Commit | Message |
|----------|--------|---------|
| SI-7902 | 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| SI-8205 | 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| SI-8126, SI-6566 | 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| SI-8146 | ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| SI-6443, SI-8143 | 1baf11a | SI-8143 Fix bug with super-accessors / dependent types |
| SI-8152 | 9df2dcc | [nomaster] SI-8152 Backport variance validator performance fix |
| SI-8111 | c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| SI-8111 | 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| SI-7120, SI-8114 | 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| SI-7636, SI-6563 | 255c51b | SI-6563 Test case for already-fixed crasher |
| SI-8104 | c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| SI-8085 | 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| SI-8085 | a12dd9c | Test demonstrating SI-8085 |
| SI-6426 | 47562e7 | Revert "SI-6426, importable _." |
| SI-8062 | f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| SI-7912 | 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| SI-8060 | bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| SI-7995 | 5ed834e | SI-7995 completion imported vars and vals |
| SI-8019 | c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| SI-8029 | fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| SI-7439 | 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| SI-8010 | 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| SI-7982 | 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| SI-6913 | 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| SI-7458 | 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| SI-7548 | 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| SI-7548 | b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| SI-8005 | 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| SI-8004 | 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| SI-7463, SI-8003 | b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| SI-7280 | 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| SI-7915 | 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| SI-7776 | d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| SI-6546 | 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| SI-7638, SI-4012 | e09a8a2 | SI-4012 Mixin and specialization work well |
| SI-7519 | 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| SI-7519 | ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| SI-4936, SI-6026 | e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| SI-6026 | 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| SI-7295 | 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| SI-7020 | 7b56021 | Disable tests for SI-7020 |
| SI-7783 | 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| SI-7815 | 733b322 | SI-7815 Dealias before deeming method type as dependent |

#### Complete commit list!

| sha | Title |
|-----|-------|
| 5f4011e | [backport] SI-7902 Fix spurious kind error due to an unitialized symbol |
| 8ee165c | SI-8205 [nomaster] backport test pos.lineContent |
| d167f14 | [nomaster] corrects an error in reify’s documentation |
| 806b6e4 | Backports library changes related to SI-6566 from a419799 |
| ff13742 | [nomaster] SI-8146 Fix non-deterministic <:< for deeply nested types |
| cbb88ac | [nomaster] Update MiMa and use new wildcard filter |
| 1baf11a | SI-8143 Fix bug with super-accessors / dependent types |
| 9df2dcc | [nomaster] SI-8152 Backport variance validator performance fix |
| c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| bd4adf5 | More clear implicitNotFound error for ExecutionContext |
| 255c51b | SI-6563 Test case for already-fixed crasher |
| c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| a12dd9c | Test demonstrating SI-8085 |
| 3fa2c97 | Report error on code size overflow, log method name. |
| 2aa9da5 | Partially revert f8d8f7d08d. |
| 47562e7 | Revert "SI-6426, importable _." |
| f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| 9cdbe28 | Fixup #3248 missed a spot in pack.xml |
| 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| e555106 | Remove docs/examples; they reside at scala/scala-dist |
| dc6dd58 | Remove unused android test and corresponding license. |
| f8d8f7d | Do not distribute partest and its dependencies. |
| 5ed834e | SI-7995 completion imported vars and vals |
| c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| 3faa2ee | [nomaster] better error messages for various macro definition errors |
| 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| 0c963c9 | [nomaster] teaches toolbox about -Yrangepos |
| 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| eb9f0f7 | [nomaster] Adds test cases for scope completion |
| 3a8796d | [nomaster] Test infrastructure for scope completion |
| 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| ec89b59 | Upgrade pax-url-aether to 1.6.0. |
| 1d29c0a | [backport] Add buildcharacter.properties to .gitignore. |
| 852a947 | Allow retrieving STARR from non-standard repo for PR validation |
| 40af1e0 | Allow publishing only core (pr validation) |
| ba0718f | Render relevant properties to buildcharacter.properties |
| d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| 6045a05 | Fix completion after application with implicit arguments |
| 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| e09a8a2 | SI-4012 Mixin and specialization work well |
| 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| 7b56021 | Disable tests for SI-7020 |
| 8986ee4 | Disable flaky presentation compiler test. |
| 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| ee9138e | Bump version to 2.10.4 for nightlies |
| 733b322 | SI-7815 Dealias before deeming method type as dependent |

### Functional Jobs

#### functional software developer at OpinionLab (Full-time)

OpinionLab is seeking a Software Developer with strong agile skills to join our Chicago, IL based Product Development team in the West Loop.

As a member of our Product Development team, you will play a critical role in the architecture, design, development, and deployment of OpinionLab’s web-based applications and services. You will be part of a high-visibility agile development team empowered to deliver high-quality, innovative, and market leading voice-of-customer (VoC) data acquisition and feedback intelligence solutions. If you thrive in a collaborative, fast-paced, get-it-done environment and want to be a part of one of Chicago’s most innovative companies, we want to speak with you!

Key Responsibilities include:

• Development of scalable data collection, storage, processing & distribution platforms & services.
• Architecture and design of a mission critical SaaS platform and associated APIs.
• Usage of and contribution to open-source technologies and framework.
• Collaboration with all members of the technical staff in the delivery of best-in-class technology solutions.
• Proficiency in Unix/Linux environments.
• Work with UX experts in bringing concepts to reality.
• Bridge the gap between design and engineering.
• Participate in planning, review, and retrospective meetings (à la Scrum).

Desired Skills & Experience:

• BDD/TDD, Pair Programming, Continuous Integration, and other agile craftsmanship practices
• Desire to learn Clojure (if you haven’t already)
• Experience with both functional and object-oriented design and development within an agile environment
• Polyglot programmer with mastery of one or more of the following languages: Lisp (Clojure, Common Lisp, Scheme), Haskell, Scala, Python, Ruby, JavaScript
• Experience delivering real-time, distributed systems in large scale production environments
• Knowledge of one or more of: AWS, Lucene/Solr/Elasticsearch, Storm, Chef
• Familiarity with Java, Clojure, Ruby and/or Python ecosystems Database experience, including but not limited to RDBMSs like PostgreSQL, Oracle, etc.
• Experience with design and development of externally facing RESTful APIs
• Familiarity with message-based (RabbitMQ, 0MQ or similar), asynchronous, and event-driven architectures
• Ability to thrive in informal and relaxed environments
• Fluency with DVCSs like Git/GitHub and/or Bitbucket
• Ruby on Rails development (version 3+)
• Experience with JS and CSS compiling, linting, minifying, cache busting
• Experience with Cross-Browser, responsive Design meeting Accessibility Standards (508, WCAG)
• Knowledge of one or more modern CSS frameworks (e.g., Bootstrap, Foundation, Bourbon, Neat)
• Knowledge of one or more modern JavaScript frameworks (Backbone, Ember, Angular, Knockout, MVC)

Compensation:

• Commensurate with experience.
• Generous benefits include medical, dental, life and disability insurances, paid holidays, vacation and sick days, 401K with employer match, FSA plan

Get information on how to apply for this position.

#### Platform Engineer at Signal Vine LLC (Full-time)

Signal Vine, LLC is an exciting early-stage company with customers and revenue that is growing quickly and looking for our next technical hire. We are building an incredible company that helps solve social issues with technology. We recognize that the key to our growth and success is hiring great people. Our ideal candidate for this position is someone who has enthusiasm for creative problem solving, but maintains the dedication and focus to achieve desired results.

We have built a communication platform for educational organizations to reach today's youth. Our platform combines text messaging with data analytics to deliver a highly personalized and interactive experience for students. Our platform helps educational organizations achieve their goals by allowing them to engage their students in a way that is natural and easy for both parties.

Our stack is Haskell, Ruby, Git, and Ubuntu, and we are primarily looking for a Haskeller with a deep understanding of computer language development to help build the next generation of the Signal Vine platform. Your main focus will be building our custom DSL, but you will also be expected to work on all aspects of the Signal Vine tech platform, including tasks such as maintaining our Ruby on Rails application as necessary, responding to customer support requests, and preparing demo content. Further, we expect you to maintain a holistic view of how technology can be used to further the Signal Vine vision.

You...

• Have built release quality software with Haskell
• Are able to create custom DSLs
• Can do self directed work
• Work well with others
• Are intellectually honest
• Can express technical concepts to a non-technical audience
• Hold a Bachelor's Degree in Mathematics, Computer Science, or a related field
• Are trustworthy and conscientious

It’d be cool if you...

• Have experience with Ruby, Scala, AngularJs
• Are interested in dev-ops and build automation
• Have used web frameworks to build one or more applications (e.g. RoR, NancyFx, Flask, Play, etc.)
• Can use CSS effectively
• Know Unix well
• Have public examples of projects you’ve completed
• Have published technically relevant articles, blog posts or books
• Maintain a social media presence that represents your technical interests and ability

We will...

• Pay a competitive salary including equity and health insurance
• Buy you a shiny new MacBook Pro
• Respect your work schedule and habits by focusing on results
• Offer you a chance to go on an exciting ride as the company grows

Get information on how to apply for this position.

## February 23, 2014

### scala-lang.org

#### Google Summer of Code 2014

This year the Scala team applied again for Google Summer of Code, and we’re happy to announce that we have been approved to be a mentoring organization!

### What is Google Summer of Code

Google invites students to come up with interesting, non-trivial problems for their favourite open-source projects and work on them over the summer. Participants get support from the community, plus a mentor who makes sure they don’t get lost and meet their goals. Aside from the satisfaction of solving challenging problems, students get paid for their work. This is an incredible opportunity to get involved in the Scala community and get helpful support along the way.

### How to get involved

First, have a look at our project ideas page. The ideas there are meant as suggestions to serve as a starting point. We expect students to explore the ideas in much more detail, preferably with their own suggestions and detailed plans on how they want to proceed. But don’t feel constrained by the provided list! We welcome any challenging project idea pertaining to Scala!

The best place to propose and discuss your proposals is our “scala-language” mailing list. This way you will get quick responses from the whole Scala community. If you know of a potential mentor, it might be a good idea to include them in your discussion on the scala-language mailing list. If not, don’t be afraid to ask on scala-language who you might be able to contact.

### Previous Summer of Code

We encourage you to have a look at our Summer of Code 2010, 2011, 2012 and 2013 pages to get an idea of the kinds of projects we undertook in previous years.

### Daniel Sobral

#### Two Thirds

This is not my usual programming-related blog post. I decided to blog about books I have been reading.

I'm a long-time fan of the Honor Harrington Series, a military science fiction series that draws on the spirit of 17th- to 19th-century naval fiction such as Horatio Hornblower or Aubrey-Maturin (from which sprang the movie Master and Commander: The Far Side of the World). These days, however, there are enough secondary stories in that universe that stories advancing the main plot are rather hard to come by. On the other hand, one could say that the original story has finally concluded, and what's going on now is a new story.

For a bit, I tried to turn to a follow-up to what is possibly my favorite fantasy trilogy, The Deed of Paksenarrion. Elizabeth Moon returned to the series with Oath of Fealty, followed by other books, but they pale in comparison with the original, which was a quite believable, and somewhat moving, story of the daughter of a sheep farmer in the back of beyond who becomes a paladin.

So, in despair, I tried searching for other stuff. First I came upon The Kingkiller Chronicles, feeling somewhat like the last one to know of it (and if you didn't know of it you should immediately get The Name of the Wind and The Wise Man's Fear). So, that's three books of which only two are written. This is fantasy, but, honestly, that's beside the point -- it is the prose and the attention to detail that make these books great reading.

Back to waiting, I looked around and found The Golden Threads Trilogy, a mix of fantasy and science fiction (though the latter mainly from the second book) story that's quite clever. I particularly love how everyone in the first book, Thread Slivers, has a different conception of what's going on and what other people want. It's highly amusing. The second book, Thread Strands, sadly decreases the fog of war factor, and leads to... well, I'll have to wait for the third book to get published to find out. Again.

As I waited, I noticed that the March Upcountry series by John Ringo was getting combo-treatment, with March Upcountry and March to the Sea being bundled in Empire of Man. It seems March to the Stars and We Few, the fourth and final book, will be out in a combo soon as well. Anyway, this is military science fiction pitting commandos against dinosaurs and spear-wielding aliens. What's not to like? :)

Now, after I re-read these books, I decided to search for other stuff by John Ringo, and came upon Black Tide Rising, a zombie series. This is one of the "realistic zombies" kind of series, where people aren't really zombies, just infected with a rabies-like virus. It tries to be realistic in the portrayal of how people survive and fight back as well, though its world is rather lighter than I feel is realistic. I don't mind though: I prefer more cheerful worlds, even in a zombie apocalypse, than what I think is realistic. :)

Anyway, I read Under a Graveyard Sky in a single day, then followed up with To Sail a Darkling Sea a little slower... but only because, damn!, that's it for now. And it's not even going to be a trilogy! As a bonus, the first book comes with a "zombie clearance playlist" -- nice! :)

#### A Completely Objective Observation

When you talk to Scala programmers and use the word “object”, they hear “composable parameterized namespace that is a first-class value”.

When you use the word “object” in front of Haskell programmers, they hear “Voldemort”.

## February 14, 2014

### Paul Chiusano

#### The reactive manifesto is 'not even wrong'

I am sure the authors and signers of the reactive manifesto are well-meaning people, but the entire tone and concept of the site is just wrong.

On a technical level, though, the reactive manifesto is too vague to critique. I could try to interpret and respond to some of the vague claims that seem wrong or silly (for instance, I detect some confusion between an asynchronous API and an asynchronous implementation), but then I fully expect defenders to define away the criticism or say I've misinterpreted things. Its arguments hold up only by being not even wrong.

When you cut through the bullshit, it seems the only actual information content is inane or tautological statements like "being resilient to failure is good" and "one shouldn't waste OS threads by blocking". Do we really need a manifesto to tell us these things? Of course not.

But the point of the reactive manifesto is not to make precise technical claims. Technical arguments don't need manifestos or rallying cries. Imagine the ridiculousness of creating quicksortmanifesto.org to rally people around O(n*log n) sorting algorithms as opposed to O(n^2) algorithms.

No, the reactive manifesto is a piece of pop culture, which I mean in the sense used by Alan Kay:

Computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were. So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

In the reactive manifesto, one is invited to join a movement and rally around a banner of buzzwords and a participatory, communal cloud of vagueness. Well, I don't want to join such a movement, and the pop culture and tribalism of our industry is something I'd like to see go away.

I would welcome some interesting precise claims and arguments (that aren't inane truisms) about how to build robust large systems (there may even be the seeds of some nuggets of truth somewhere in the reactive manifesto). But let's not make it a manifesto, please!

Update: I recently received a note from a recruiter, which contained the following gem:

We checked out your projects on GitHub and we are really impressed with your Scala skills. Interested in solving hard problems? Does designing and building massively scaling, event-driven systems get you excited? Do you believe in the reactive manifesto? Let’s talk.


## February 12, 2014

### Kris Nuttycombe

#### The Abstract Future

This post is being resurrected from the dustbin of history - it was originally posted on the precog.com engineering blog, which has since been lost to acquisition and bitrot. My opinion on the applicability of the following techniques has changed somewhat since I originally wrote this post; I will address this in a followup post in the near future. Briefly, though, I believe that parameterizing each method with an implicit monad constraint is preferable where possible; it provides the user with greater flexibility.

In our last blog post on Precog development, Daniel wrote about how we use the Cake Pattern to structure our codebase and to leave the implementation types abstract as long as possible. As he showed in that post, this is an extremely powerful concept; by keeping a type existential, values of that type remain opaque to any modules that aren’t “aware” of the eventual type chosen, and so are prevented by the compiler from breaking encapsulation boundaries.

In today’s post, we’re going to extend this notion beyond types to handle type constructors, and in so doing will show a mechanism that allows us to switch out entire models of computation.

If you’ve been working with Scala for any length of time, you’ve undoubtedly heard the word “monad” floating around in one context or another, perhaps in a discussion about the syntactic sugar provided by Scala’s ‘for’ keyword or a blog post discussing how the Option type can be used to avoid the pitfalls of null references. While a significant amount of discussion of monads in Scala focuses on the “container” types, a few types common in the Scala ecosystem display a more interesting facet of monadic composition – delimited computation. While all monadic types exhibit this in composition, perhaps the most commonly used monadic type in Scala that exemplifies this sort of use directly is akka.dispatch.Future (which is scheduled to replace Scala’s current Future interface in the standard library in Scala 2.10), which encodes asynchronous computation. It embodies the aspect of monadic composition that we’re most concerned with here by providing a flexible way to order the steps of a computation.
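
A tiny sketch of this "ordering" aspect, using today's scala.concurrent.Future (the standard-library API that akka.dispatch.Future was folded into); the functions here are hypothetical stand-ins, not from the original post:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Two asynchronous steps sequenced monadically (flatMap, written as a
// for-comprehension), with no manual thread or lock management. The
// bodies are stand-ins for real I/O; the point is the ordering.
def lookupAccountId(user: String): Future[Int] = Future { user.length }
def lookupBalance(id: Int): Future[Double]     = Future { id * 10.0 }

val balance: Future[Double] =
  for {
    id  <- lookupAccountId("alice")
    bal <- lookupBalance(id)
  } yield bal
```

The for-comprehension guarantees that `lookupBalance` runs only after `lookupAccountId` has produced its value, which is exactly the "flexible way to order the steps of a computation" described above.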

I’d like to step back a moment here and state that this post isn’t intended to function as a monad tutorial; there are numerous (perhaps too many!) articles about monads and their relevance to programming in Scala elsewhere. If you’re new to the concept, it will be useful for you to take advantage of one or more of these resources before continuing here. It is, however, important to point out at the outset that the use of monads in Scala, while pervasive (as evidenced by the presence of ‘for’ as syntactic sugar for monadic composition), is somewhat idiosyncratic in that the Scala standard library actually provides no Monad type. For this, we have to look outside of the standard library to the excellent scalaz project. Scalaz’s encoding of the monadic abstraction relies upon the implicit typeclass pattern. The base Monad type is shown here, simplified, for reference:

```scala
trait Monad[M[_]] {
  def point[A](a: => A): M[A]
  def bind[A, B](m: M[A])(f: A => M[B]): M[B]
  def map[A, B](m: M[A])(f: A => B): M[B] = bind(m)(a => point(f(a)))
}
```
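
As a concrete illustration (mine, not from the original post), here is what an instance of this trait for Option looks like, with the trait reproduced so the snippet stands alone:

```scala
// The simplified Monad trait from above, reproduced for self-containment.
trait Monad[M[_]] {
  def point[A](a: => A): M[A]
  def bind[A, B](m: M[A])(f: A => M[B]): M[B]
  def map[A, B](m: M[A])(f: A => B): M[B] = bind(m)(a => point(f(a)))
}

// A lawful instance for Option: point wraps a value, bind sequences
// possibly-absent computations, and map comes for free from the default.
object OptionMonad extends Monad[Option] {
  def point[A](a: => A): Option[A] = Some(a)
  def bind[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m flatMap f
}
```

`OptionMonad.map(Some(21))(_ * 2)` evaluates to `Some(42)`, derived entirely from `point` and `bind` — the same trick that lets the abstraction below swap in Future or a transformer stack.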

You’ll note that the Monad trait is not parameterized by a specific type, but instead by a type constructor of one argument. The methods defined inside of Monad are then parametrically polymorphic, which means that the caller must provide specific types to be inserted into the “holes” at the invocation point. This will be important later, when we talk about how to actually take advantage of this abstraction.

Scalaz provides implementations of this type for most of the monadic types in the Scala standard library, as well as several more sophisticated monadic types, which we’ll return to in a moment. For now, however, let’s talk a bit about Akka’s Futures.

An Akka Future represents a computation whose value is produced asynchronously, and which may fail. Also, as I noted before, akka.dispatch.Future is monadic; that is, it is a type for which the Monad trait above can be trivially implemented and which satisfies the monad laws, and so it provides an extremely useful primitive for composing asynchronous computations without all sorts of tedious mucking about with manual management of threads and shared mutable state. At Precog, we use Futures extensively, both in a direct fashion and to allow us a composable way to interact with subsystems that are implemented atop Akka’s actor framework. Futures are arguably one of the best tools we have for reining in the complexity of asynchronous programming, and so many of the early APIs in our codebase exposed Futures directly. For example, here’s a snippet of one of our internal APIs, which follows the Cake pattern as described previously.

```scala
trait DatasetModule {
  type Dataset

  trait DatasetLike {
    /** The members of this dataset will be used to determine what sets to
        load, and the resulting sets will be unioned together */
    def load: Future[Dataset]

    /** Sorts the dataset by the specified value function. */
    def sort(sortBy: /*...*/): Future[Dataset]

    /** Retains a prefix of this dataset. */
    def take(size: Int): Dataset

    /** Map members of the dataset into the A type using the specified value
        function, then combine using the resulting monoid */
    def reduce[A: Monoid](mapTo: /*...*/): Future[A]
  }
}
```

The Dataset type here is something of a strawman, but is loosely representative of the type that we use internally to represent an intermediate result of a computation - a lazy data structure with a number of operations that can be used to manipulate it, some of which may involve actually evaluating a function over the entire dataset and which may involve I/O, distributed evaluation, and asynchronous computation. Based on this interface, it’s easy to see that evaluation of some query with respect to a dataset might involve a load, a sort, taking a prefix, and a reduction of that prefix. Moreover, such an evaluation will not rely upon anything except the monadic nature of Future to compose its steps. What this means is that from the perspective of the consumer of the DatasetModule interface, the only aspect of Future that we’re relying upon is the ability to order operations in a statically checked fashion; the sequencing, rather than any particular semantics related to Future’s asynchrony, is the relevant piece of information provided by the type. So, the following generalization becomes natural:

```scala
trait DatasetModule[M[+_]] {
  type Dataset

  implicit def M: Monad[M]

  trait DatasetLike {
    /** The members of this dataset will be used to determine what sets to
        load, and the resulting sets will be unioned together */
    def load: M[Dataset]

    /** Sorts the dataset by the specified value function. */
    def sort(sortBy: /*...*/): M[Dataset]

    /** Retains a prefix of this dataset. */
    def take(size: Int): Dataset

    /** Map members of the dataset into the A type using the specified value
        function, then combine using the resulting monoid */
    def reduce[A: Monoid](mapTo: /*...*/): M[A]
  }
}
```

and, of course, down the road some concrete implementation of DatasetModule will refine the type constructor M to be Future:

```scala
/** The implicit ExecutionContext is necessary for the implementation of
    M.point */
class FutureMonad(implicit executor: ExecutionContext) extends Monad[Future] {
  override def point[A](a: => A): Future[A] = Future { a }
  override def bind[A, B](m: Future[A])(f: A => Future[B]): Future[B] =
    m flatMap f
}

abstract class ConcreteDatasetModule(implicit executor: ExecutionContext)
    extends DatasetModule[Future] {
  val M: Monad[Future] = new FutureMonad
}
```

In practice, we may actually leave M abstract until “the end of the universe.” In the Precog codebase, the M type will frequently represent the bottom of a stack of monad transformers including StateT, StreamT, EitherT and others that the actual implementation of the Dataset type depends upon.

This generalization has numerous benefits. First, as with the previous examples of our use of the Cake pattern, consumers of the DatasetModule trait are completely and statically insulated from irrelevant details of the implementation type. An important such consumer is a test suite. In a test, we probably don’t want to worry about the fact that the computation is being performed asynchronously; all that we care about is that we obtain a correct result. If our M is in fact at the bottom of a transformer stack, we can trivially replace it with the identity monad and use the “copointed” nature of this monad (the ability to “extract” a value from the monadic context). This allows us to build a similarly generic test harness:

```scala
/** Copointed is available from scalaz as well; reproduced here for clarity */
trait Copointed[M[_]] {
  /** Extract and return the value from the enclosing context. */
  def copoint[A](m: M[A]): A
}

trait TestDatasetModule[M[+_]] extends DatasetModule[M] {
  implicit def M: Monad[M] with Copointed[M]

  //... utilities for test dataset generation, stubbing load/sort, etc.
}
```

For most cases, we’ll use the identity monad for testing. Suppose that we’re testing the piece of functionality described earlier, which has computed a result from the combination of a load, a sort, take and reduce. The test framework need never consider the monad that it’s operating in.

```scala
import scalaz._
import scalaz.syntax.monad._
import scalaz.syntax.copointed._

class MyEvaluationSpec extends Specification {
  val module = new TestDatasetModule[Id] with ConcreteDatasetModule[Id] {
    val M = Monad[Id] // the monad for Id is copointed in Scalaz.
  }

  "evaluation" should {
    "determine the correct result for the load/sort/take/reduce case" in {
      val loadFrom: module.Dataset = //...
      val expected: Int = //...

      val result = for {
        ds     <- loadFrom.load
        sorted <- ds.sort(mySortFun)
        prefix  = sorted.take(10)
        value  <- prefix.reduce[Int](myCountFunc)
      } yield value

      result.copoint must_== expected
    }
  }
}
```

In the case that we have a portion of the implementation that actually depends upon the specific monadic type (say, for example, that our sort implementation relies on Akka actors and the “ask” pattern under the hood, so that we’re using Futures) we can simply encode this in our test in a straightforward fashion:

```scala
abstract class TestFutureDatasetModule(implicit executor: ExecutionContext)
    extends TestDatasetModule[Future] {

  def testTimeout: akka.util.Duration

  object M extends FutureMonad(executor) with Copointed[Future] {
    def copoint[A](m: Future[A]): A = Await.result(m, testTimeout)
  }
}
```

Future is, of course, not properly copointed (since Await can throw an exception) but for the purposes of testing (and testing exclusively) this construction is ideal. As before, we get exactly the type that we need, statically determined, at exactly the place that we need it.

In practice, we’ve found that abstracting away the particular monad that our code is concerned with has aided tremendously with keeping the concerns of different parts of our codebase well isolated, and ensuring that we’re simply not able to sidestep the sequencing requirements that are necessary to make a large, functional codebase work together as a coherent whole. As an added benefit, many parts of our application that were not initially designed thinking in terms of parallel execution are able to execute concurrently. For example, in many cases we’ll be computing a List[M[...]] and then using the “sequence” function provided by scalaz.Traverse to turn this into an M[List[...]] - and when M is Future, each element may be computed in parallel, with the final sequenced result becoming available only when all the computations to produce the members of the list are complete. And, ultimately, even this merely scratches the surface of the deep well of compositionality that this abstraction makes possible.
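
That List[M[...]] to M[List[...]] flip can be sketched without scalaz using the standard library's Future.sequence, which behaves the same way for this case (a hypothetical illustration, not Precog code):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Each element of the list may run in parallel; the sequenced result is
// available only once all of them complete, and order is preserved.
val parts: List[Future[Int]] = List(1, 2, 3).map(n => Future { n * n })
val whole: Future[List[Int]] = Future.sequence(parts)

val all: List[Int] = Await.result(whole, 5.seconds) // List(1, 4, 9)
```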

## February 03, 2014

### scala-lang.org

#### Scala 2.10.4-RC2 is now available!

We are very happy to announce the second release candidate of Scala 2.10.4! If no serious blocking issues are found, this will become the final 2.10.4 version.

The release is available for download from scala-lang.org or from Maven Central.

The Scala team and contributors fixed 23 issues since 2.10.3!

In total, 39 RC1 pull requests and 12 RC2 pull requests were merged on GitHub.

### Known Issues

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built right in is available through the following update-site:

Have a look at the getting started guide for more info.

### New features in the 2.10 series

Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

• Value Classes

• Implicit Classes

• String Interpolation

• Futures and Promises

• Dynamic and applyDynamic

• Dependent method types:

• def identity(x: AnyRef): x.type = x // the return type says we return exactly what we got
• New ByteCode emitter based on ASM

• Can target JDK 1.5, 1.6 and 1.7

• Emits 1.6 bytecode by default

• Old 1.5 backend is deprecated

• A new Pattern Matcher

• rewritten from scratch to generate more robust code (no more exponential blow-up!)

• code generation and analyses are now independent (the latter can be turned off with -Xno-patmat-analysis)

• Scaladoc improvements

• Implicits (-implicits flag)

• Diagrams (-diagrams flag, requires graphviz)

• Groups (-groups)

• Modularized Language features

• Parallel Collections are now configurable with custom thread pools

• Akka Actors now part of the distribution

• scala.actors has been deprecated, and the Akka implementation is now included in the distribution.

• See the actors migration project for more information.

• Performance Improvements

• Faster inliner

• Range#sum is now O(1)

• Update of ForkJoin library

• Fixes in immutable TreeSet/TreeMap

• Improvements to PartialFunctions

• Addition of ??? and NotImplementedError

• Addition of IsTraversableOnce + IsTraversableLike type classes for extension methods

• Deprecations and cleanup

• Floating point and octal literal syntax deprecation

• Removed scala.dbc
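
A few of the features above in miniature (hypothetical snippets of my own, not from the release notes):

```scala
// Value class: Meters gets method syntax without allocating a wrapper at
// runtime (value classes must be top-level or members of an object).
object Units {
  class Meters(val value: Double) extends AnyVal {
    def +(other: Meters): Meters = new Meters(value + other.value)
  }
  // Implicit class (here also a value class): adds a `meters` conversion
  // to Double without an explicit wrapper at the call site.
  implicit class RichDouble(val d: Double) extends AnyVal {
    def meters: Meters = new Meters(d)
  }
}

import Units._

// String interpolation: the s-interpolator splices expressions into strings.
val span = 1.5.meters + 2.5.meters
val msg  = s"span is ${span.value} meters"
```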

### Experimental features

The API is subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

#### A big thank you to all the contributors!

| # | Author |
|----|-------------------|
| 21 | Jason Zaugg |
| 4 | Eugene Burmako |
| 3 | Simon Schaefer |
| 3 | Mirco Dotta |
| 3 | Luc Bourlier |
| 2 | Som Snytt |
| 2 | Paul Phillips |
| 1 | Mark Harrah |
| 1 | James Ward |
| 1 | Heather Miller |
| 1 | Roberto Tyley |
| 1 | François Garillot |

#### Commits and the issues they fixed since v2.10.3

| Issue(s) | Commit | Message |
|----------|--------|---------|
| SI-8111 | c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| SI-8111 | 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| SI-7120, SI-8114, SI-7120 | 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| SI-7636, SI-6563 | 255c51b | SI-6563 Test case for already-fixed crasher |
| SI-8104, SI-8104 | c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| SI-8085 | 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| SI-8085 | a12dd9c | Test demonstrating SI-8085 |
| SI-6426 | 47562e7 | Revert "SI-6426, importable _." |
| SI-8062 | f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| SI-7912 | 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| SI-8060 | bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| SI-7995 | 5ed834e | SI-7995 completion imported vars and vals |
| SI-8019 | c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| SI-8029 | fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| SI-7439 | 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| SI-8010 | 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| SI-7982 | 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| SI-6913 | 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| SI-7458 | 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| SI-7548 | 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| SI-7548 | b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| SI-8005 | 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| SI-8004 | 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| SI-7463, SI-8003 | b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| SI-7280 | 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| SI-7915 | 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| SI-7776 | d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| SI-6546 | 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| SI-7638, SI-4012 | e09a8a2 | SI-4012 Mixin and specialization work well |
| SI-7519 | 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| SI-7519 | ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| SI-4936, SI-6026 | e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| SI-6026 | 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| SI-7295 | 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| SI-7020 | 7b56021 | Disable tests for SI-7020 |
| SI-7783 | 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| SI-7815 | 733b322 | SI-7815 Dealias before deeming method type as dependent |

#### Complete commit list!

| sha | Title |
|-----|-------|
| c91d373 | SI-8111 Expand the comment with a more detailed TODO |
| 2c770ae | SI-8111 Repair symbol owners after abandoned named-/default-args |
| 5876e8c | [nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120 |
| bd4adf5 | More clear implicitNotFound error for ExecutionContext |
| 255c51b | SI-6563 Test case for already-fixed crasher |
| c0cb1d8 | [nomaster] codifies the state of the art wrt SI-8104 |
| 7e85b59 | SI-8085 Fix BrowserTraverser for package objects |
| a12dd9c | Test demonstrating SI-8085 |
| 3fa2c97 | Report error on code size overflow, log method name. |
| 2aa9da5 | Partially revert f8d8f7d08d. |
| 47562e7 | Revert "SI-6426, importable _." |
| f0d913b | SI-8062 Fix inliner cycle with recursion, separate compilation |
| 9cdbe28 | Fixup #3248 missed a spot in pack.xml |
| 006e2f2 | SI-7912 Be defensive calling toString in MatchError#getMessage |
| bb427a3 | SI-8060 Avoid infinite loop with higher kinded type alias |
| e555106 | Remove docs/examples; they reside at scala/scala-dist |
| dc6dd58 | Remove unused android test and corresponding license. |
| f8d8f7d | Do not distribute partest and its dependencies. |
| 5ed834e | SI-7995 completion imported vars and vals |
| c955cf4 | SI-8019 Make Publisher check PartialFunction is defined for Event |
| fdcc262 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| 8d74fa0 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| 9036f77 | SI-8010 Fix regression in erasure double definition checks |
| 3faa2ee | [nomaster] better error messages for various macro definition errors |
| 7d41094 | SI-7982 Changed contract of askLoadedType to unload units by default |
| 7063439 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| 02308c9 | SI-7458 Pres. compiler must not observe trees in silent mode |
| 652b3b4 | SI-7548 Test to demonstrate residual exploratory typing bug |
| b7509c9 | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| 0c963c9 | [nomaster] teaches toolbox about -Yrangepos |
| 3629b64 | SI-8005 Fixes NoPositon error for updateDynamic calls |
| 696545d | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| b915f44 | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| 053a274 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| eb9f0f7 | [nomaster] Adds test cases for scope completion |
| 3a8796d | [nomaster] Test infrastructure for scope completion |
| 04df2e4 | SI-7915 Corrected range positions created during default args expansion |
| ec89b59 | Upgrade pax-url-aether to 1.6.0. |
| 1d29c0a | [backport] Add buildcharacter.properties to .gitignore. |
| 852a947 | Allow retrieving STARR from non-standard repo for PR validation |
| 40af1e0 | Allow publishing only core (pr validation) |
| ba0718f | Render relevant properties to buildcharacter.properties |
| d15ed08 | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| 6045a05 | Fix completion after application with implicit arguments |
| 075f6f2 | SI-6546 InnerClasses attribute refers to absent class |
| e09a8a2 | SI-4012 Mixin and specialization work well |
| 50c8b39e | SI-7519: Additional test case covering sbt/sbt#914 |
| ce74bb0 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| e350bd2 | [nomaster] SI-6026 backport getResource bug fix |
| 2bfe0e7 | SI-6026 REPL checks for javap before tools.jar |
| 25bcba5 | SI-7295 Fix windows batch file with args containing parentheses |
| 7b56021 | Disable tests for SI-7020 |
| 8986ee4 | Disable flaky presentation compiler test. |
| 2ccbfa5 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| ee9138e | Bump version to 2.10.4 for nightlies |
| 733b322 | SI-7815 Dealias before deeming method type as dependent |

## January 30, 2014

### Paul Chiusano

#### An actually secure payments protocol, and Coin's broken security

Here is a sketch of what should happen when I pay for a meal at a restaurant:

1. Waiter brings the bill over, which contains a QR code or some such. The QR code contains the name of the merchant, its public key, and an amount. I point my phone at it, up pops a screen that says "Acme Burgers Inc. would like to charge $33.04", I add a tip, bringing the total to $39.04, then press 'Authorize'. The payment app on my phone uses my private key (stored only on my phone in RAM) to sign the tuple ($39.04, timestamp, Acme Burgers Inc public key). My phone renders this signed tuple as a QR code. I get up and walk out of the restaurant, stopping at the door to scan my QR code at a reader. Acme Burgers now has the information needed to charge my bank later.

2. At any point later, or perhaps immediately, Acme Burgers contacts my bank and demonstrates cryptographic proof that I have authorized a charge to them of $39.04, and said bank transfers cash money to Acme and debits my account with them. The signature prevents any other party without Acme's private key from authorizing this transfer, and Acme is prevented from replaying this later due to the timestamp / nonce.
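The two steps above boil down to sign-then-verify plus a replay check. Here is a minimal sketch using the JDK's `java.security.Signature`; the pipe-separated tuple encoding, the `seen`-nonce set, and all names and amounts are illustrative assumptions, not part of any real payment system:

```scala
import java.nio.charset.StandardCharsets
import java.security.{PrivateKey, PublicKey, Signature}
import scala.collection.mutable

object PaymentSketch {
  private val Algo = "SHA256withECDSA"

  // Step 1: encode the authorization tuple (amount, timestamp/nonce, merchant key).
  // A real protocol would use a canonical, unambiguous encoding.
  def encode(amountCents: Long, nonce: Long, merchantKey: String): Array[Byte] =
    s"$amountCents|$nonce|$merchantKey".getBytes(StandardCharsets.UTF_8)

  // The customer's phone signs the tuple with the customer's private key.
  def sign(priv: PrivateKey, payload: Array[Byte]): Array[Byte] = {
    val s = Signature.getInstance(Algo)
    s.initSign(priv)
    s.update(payload)
    s.sign()
  }

  // Step 2: the bank checks the signature against the customer's public key
  // and rejects a nonce it has already honored, preventing replays.
  def verify(pub: PublicKey, payload: Array[Byte], sig: Array[Byte],
             nonce: Long, seen: mutable.Set[Long]): Boolean = {
    val v = Signature.getInstance(Algo)
    v.initVerify(pub)
    v.update(payload)
    // `Set#add` returns false if the nonce was already present; the && also
    // ensures an invalid signature never consumes a nonce.
    v.verify(sig) && seen.add(nonce)
  }
}
```

To exercise it, a keypair can come from `java.security.KeyPairGenerator.getInstance("EC")`; verifying the same signed tuple a second time with the same nonce then fails, which is exactly the replay protection the timestamp/nonce is meant to provide.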

Unlike credit cards, I don't give the merchant access to all my credit, for all time, and hope they only use the credit I've authorized. I've given them exactly the capability they need, no more. The security of the merchant's accounting systems is no longer something that concerns me.

A few notes:

• This protocol does not require internet access at the time of sale, and it's pretty damn convenient too.
• The protocol could be made anonymous, so the merchant does not even learn my public key.
• Recurring payments could work the same way, only what is being signed by me is something like "allow a charge of $7/mo, for the next 12 months", rather than a single amount.
• Online payments could use the same sort of protocol. Obviously we can dispense with using QR codes as a communication channel (or not).
• This is obvious stuff, and not in any way new, right?

Now, why the heck doesn't something like this exist? Software lets us solve these problems better, and yet the payments industry is still using the virtual equivalent of distributing and transferring account numbers on scraps of paper. Here is what passes for innovation in the payments industry:

• Coin was announced several months ago. It's a single physical card, called a 'Coin', which can store all your cards. By pressing a button on the Coin, you reprogram the magnetic strip to a different active card. Now, rather than giving the merchant access to all your credit, for all time, for one of your accounts, you can give them access to ALL of your accounts. When I pointed out this very real and very serious security issue, I got radio silence...
• LevelUp lets you pay with your phone. Rather than carrying around a credit card, which gives access to all your credit, you carry around your phone with the LevelUp app, which has a QR code that gives access to all your credit... The security model is unchanged! Nothing stops the merchant or anyone who hacks the merchant's systems from repeatedly using the information from your QR code to drain your bank account. Nothing stops a nefarious bystander from grabbing your QR code and draining your bank account (they just need to have or control a merchant account with LevelUp). What's missing is the step where the user actually authorizes the charge.

Bob: Oh, come on. Credit cards aren't so bad. Even if they are technically insecure, I'm not liable for fraudulent or unauthorized charges, so what difference does it make to me?
Alice: Who do you think pays for all those fraudulent charges?

Bob: What do you mean? My credit card company pays for it. Or the merchant pays for it.

Alice: Yes, and how do you think they pay for it?

Bob: Umm...

Alice: It's paid for via credit card processing fees, which get passed on to you, the customer, in the form of higher average prices. The money has to come from somewhere. So actually, it's you that pays for fraudulent charges, you just pay for them as a tax on every transaction, rather than in bursts. And those are just the direct costs of fraud. The indirect costs are also significant: consider the additional time spent having to review your credit card statement more carefully due to higher probability of fraud. Consider the time spent having to update your payment information with dozens of merchants when Target or some other merchant with your information gets hacked. Consider the additional machine learning algorithms the credit card companies have to run, to analyze transactions and detect fraud. Consider the cost of the false positives in these systems, the salaries of the phone operators who have to be on hand when you call to say 'no, really, I am in Montana trying to buy some beef jerky, my card has not been stolen'. And that's just off the top of my head.

This stuff isn't that hard, is it?

## January 27, 2014

### scala-lang.org

#### Scala 2.11.0-M8 is now available!

We are pleased to announce the final milestone release of Scala 2.11.0! Please do try out this release to help us find any regressions before the first release candidate, which is scheduled for February 18. For production use, we recommend the latest stable release, 2.10.3 (soon 2.10.4).

If your code compiled on 2.10.x without deprecation warnings, it should compile on 2.11.x. If not, please file a regression. We are working with the community to ensure availability of the core artifacts of the Scala 2.11.x eco-system.
This release is not binary compatible with the 2.10.x series, so that we can keep improving the Scala standard library.

Scala 2.11.0-M8 is available for download from scala-lang.org or from Maven Central. The Scala team and contributors fixed 119 issues via 174 merged pull requests!

### Reporting Bugs / Known Issues

Please file any bugs you encounter. If you’re unsure whether something is a bug, please contact the scala-user mailing list. Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built in will soon be available at the usual update-site:

Have a look at the getting started guide for more info.

### New features in the 2.11 series

This release contains all of the bug fixes and improvements made in the 2.10 series, as well as:

• Modularization

• The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, and swing are available individually or via scala-library-all.
• The compiler has been internally modularized, to separate the presentation compiler, scaladoc and the REPL. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.

• Slimming down

• The experimental .NET backend has been removed from the compiler.
• In Scala 2.10.0, new implementations of the Pattern Matcher and the Bytecode Emitter were shipped. We have now removed the old implementations.
• scala-actors is now deprecated; we advise users to follow the steps in the Actors Migration Guide to port to Akka Actors, which have been included in the distribution since 2.10.0.
• Search and destroy mission for ~5000 chunks of dead code. #1648

• Language

• Case classes with > 22 parameters are now supported SI-7296
• Infer bounds of existential types SI-1786

• REPL

• Performance

• Branch elimination through constant analysis #2214
• Improve performance of reflection SI-6638

• Warnings

• Warn about unused private / local terms and types, and unused imports, under -Xlint. This will even tell you when a local var could be a val. (We might move these warnings to a separate command line option before the final release, your feedback is welcome here.)

#### A big thank you to all the contributors!

# Author
75<notextile>Jason Zaugg</notextile>
42<notextile>Eugene Burmako</notextile>
31<notextile>Adriaan Moors</notextile>
24<notextile>Den Shabalin</notextile>
15<notextile>Simon Ochsenreither</notextile>
13<notextile>Som Snytt</notextile>
11<notextile>Paul Phillips</notextile>
10<notextile>Rex Kerr</notextile>
9<notextile>Vladimir Nikolaev</notextile>
8<notextile>Mirco Dotta</notextile>
7<notextile>Miguel Garcia</notextile>
7<notextile>Rüdiger Klaehn</notextile>
5<notextile>François Garillot</notextile>
4<notextile>Simon Schaefer</notextile>
4<notextile>Luc Bourlier</notextile>
3<notextile>Denys Shabalin</notextile>
2<notextile>Olivier Blanvillain</notextile>
2<notextile>Antoine Gourlay</notextile>
2<notextile>Kenji Yoshida</notextile>
2<notextile>Christoffer Sawicki</notextile>
2<notextile>Paolo Giarrusso</notextile>
1<notextile>Erik Osheim</notextile>
1<notextile>Scott Carey</notextile>
1<notextile>James Iry</notextile>
1<notextile>Chris Hodapp</notextile>
1<notextile>James Ward</notextile>
1<notextile>Heather Miller</notextile>
1<notextile>Thomas Geier</notextile>
1<notextile>Jason Swartz</notextile>
1<notextile>Visitor</notextile>
1<notextile>Johannes Rudolph</notextile>
1<notextile>Roberto Tyley</notextile>
1<notextile>Dmitry Petrashko</notextile>

#### Commits and the issues they fixed since v2.11.0-M7

Issue(s) Commit Message
SI-6443, SI-8143 1baf11a2bb<notextile>SI-8143 Fix bug with super-accessors / dependent types</notextile>
SI-8152 9df2dcc584<notextile>[nomaster] SI-8152 Backport variance validator performance fix</notextile>
SI-8111 c91d373a78<notextile>SI-8111 Expand the comment with a more detailed TODO</notextile>
SI-8111 2c770ae31a<notextile>SI-8111 Repair symbol owners after abandoned named-/default-args</notextile>
SI-7120, SI-8114, SI-7120 5876e8c621<notextile>[nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120</notextile>
SI-7636, SI-6563 255c51b3dd<notextile>SI-6563 Test case for already-fixed crasher</notextile>
SI-8104, SI-8104 c0cb1d891a<notextile>[nomaster] codifies the state of the art wrt SI-8104</notextile>
SI-8085 7e85b59550<notextile>SI-8085 Fix BrowserTraverser for package objects</notextile>
SI-8085 a12dd9c3b6<notextile>Test demonstrating SI-8085</notextile>
SI-6426 47562e7adb<notextile>Revert "SI-6426, importable _."</notextile>
SI-8062 f0d913b51d<notextile>SI-8062 Fix inliner cycle with recursion, separate compilation</notextile>
SI-8157 ca05d22006<notextile>SI-8157 Make overloading, defaults restriction PolyType aware</notextile>
SI-6253 034f6b9452<notextile>SI-6253 HashSet should implement union</notextile>
SI-5604, SI-5604 841dbc9c8c<notextile>removing defensive code made obsolete by existing fix to SI-5604</notextile>
SI-6089, SI-7749 c4e37d6521<notextile>overzealous assert in GenBCode</notextile>
SI-8126, SI-7335 94e05a8501<notextile>SI-8126 Puts SI-7335 fix behind a source level flag</notextile>
SI-8126, SI-6899 6dd3653b9c<notextile>SI-8126 Puts SI-6899 fix under a source level flag</notextile>
SI-8126 d43618a92c<notextile>SI-8126 Add a '-Xsource' flag allowing compilation in e.g. 2.10 mode</notextile>
SI-4370 994de8ffd1<notextile>SI-4370 Range bug: Wrong result for Long.MinValue to Long.MaxValue by Int.MaxVal</notextile>
SI-8148 973c7066b8<notextile>SI-8148 fix anonymous functions with placeholders</notextile>
SI-6196, SI-6200 47a91d76fc<notextile>SI-6200 - HashMap should implement filter</notextile>
SI-6196 afcfba02ed<notextile>SI-6196 - Set should implement filter</notextile>
SI-7544 af75be6034<notextile>SI-7544 StringContext.f docs update</notextile>
SI-6457 bfa70315d7<notextile>SI-6457 ImmutableSetFactory.empty results in StackOverflowError</notextile>
SI-6153, SI-6173, SI-6456, SI-6699, SI-8116 29541ce396<notextile>Quasi-comprehensive BigDecimal soundness/correctness fix.</notextile>
SI-8100 2477bbd9d6<notextile>SI-8100 - prevent possible SOE during Stream#flatten.</notextile>
SI-7469 765ac94c2b<notextile>SI-7469 Remove misc. @deprecated elements</notextile>
SI-8015 f606d8176e<notextile>SI-8015 Refactor per code review</notextile>
SI-8015 2c8a8ff6ba<notextile>SI-8015 Carat => Caret</notextile>
SI-8015 8be560a1cf<notextile>SI-8015 Unprintables in messages</notextile>
SI-8015 bb2e99a692<notextile>SI-8015 Count lines by EOLs</notextile>
SI-8035 c5567e2700<notextile>SI-8035 Deprecate automatic () insertion in argument lists</notextile>
SI-8107 2fe767806b<notextile>SI-8107: Use Regex.quote</notextile>
SI-8107 780ceca6a3<notextile>SI-8107: Add Regex.quote</notextile>
SI-8081 b8a76f688c<notextile>SI-8081 unzip/unzip3 return wrong static type when applied to Arrays</notextile>
SI-8132 8642a50da8<notextile>SI-8132 Fix false "overrides nothing" for case class protected param</notextile>
SI-7326 24a227d23d<notextile>Implements specialized subsetOf for HashSet</notextile>
SI-8146 a09e143b7f<notextile>SI-8146 Fix non-deterministic <:< for deeply nested types</notextile>
SI-8146 2e28cf7f76<notextile>SI-8146 Test cases for typechecking decidability</notextile>
SI-8146 8beeef339a<notextile>SI-8146 Pending test, diagnosis for bug in decidability of <:<</notextile>
SI-8128 3e9e2c65a6<notextile>SI-8128 Fix regression in extractors returning existentials</notextile>
SI-8045, SI-8045 1696145f76<notextile>SI-8045 type inference of extracted value</notextile>
SI-7850 def46a9d44<notextile>SI-7850 CCE in patmat with invalid isEmpty.</notextile>
SI-6111, SI-6675, SI-7897, SI-6675 11bfa25e37<notextile>SI-7897, SI-6675 improves name-based patmat</notextile>
SI-6615 8dd69ecfa7<notextile>SI-6615 junit test</notextile>
SI-8058 a90f39cdb5<notextile>SI-8058 Better support for enum trees</notextile>
SI-4841 77a66d3525<notextile>SI-4841 CLI help update for -Xplugin</notextile>
SI-8046 6f42bd6881<notextile>SI-8046 Only use fast TypeRef#baseTypeSeq with concrete base types</notextile>
SI-6161 0de991ffea<notextile>Pending test for SI-6161</notextile>
SI-8046 edc9edb79b<notextile>SI-8046 Fix baseTypeSeq in presence of type aliases</notextile>
SI-2066 28d3390e07<notextile>SI-2066 Plug a soundness hole higher order type params, overriding</notextile>
SI-6615 ad594604ed<notextile>SI-6615 PagedSeq's slice throws a NPE if it starts on a page that hasn't been co</notextile>
SI-6364 973f69ac75<notextile>SI-6364 SetWrapper does not preserve performance / behavior</notextile>
SI-7680 cb0d2854e1<notextile>SI-7680 Update the ScalaDoc entry page of the Scala library</notextile>
SI-8129 00e11ffdd4<notextile>SI-8129 Plug a leak in perRunCaches</notextile>
SI-8131, SI-8131 1d908106cf<notextile>SI-8131 Move test for reflection thread safety to pending.</notextile>
SI-8135 3b68163e47<notextile>SI-8135 Disabled flaky hyperlinking presentation compiler test</notextile>
SI-7443 4b6a0a999e<notextile>SI-7443 Use typeclass instance for {Range,NumericRange}.sum</notextile>
SI-6812 6e4c926b4a<notextile>Use macro expandee, rather than expansion, in pres. compiler</notextile>
SI-8064 d744921f85<notextile>SI-8064 Automatic position repair for macro expansion</notextile>
SI-7974 2e7c7347b9<notextile>SI-7974 Clean up and test 'Symbol-handling code in CleanUp</notextile>
SI-7974 5e1e472fa1<notextile>SI-7974 Avoid calling nonPrivateMember after erasure</notextile>
SI-4827 4936c43c13<notextile>SI-4827 Corrected positions assigned to constructor's default arg</notextile>
SI-4827 bdb0ac0fe5<notextile>SI-4827 Test to demonstrate wrong position of constructor default arg</notextile>
SI-4287, SI-4287, SI-4287 7f4720c5db<notextile>SI-4287 Added test demonstrating hyperlinking to constructor's argument</notextile>
SI-7491 906e517135<notextile>SI-7491 deprecate overriding App.main and clarify documentation</notextile>
SI-7859 7f16e4d1c5<notextile>SI-7859 fix AnyVal.scala scaladoc.</notextile>
SI-7492 bbe963873d<notextile>SI-7492 Make scala.runtime.MethodCache private[scala]</notextile>
SI-8120 5b9966d077<notextile>SI-8120 Avoid tree sharing when typechecking patmat anon functions</notextile>
SI-8102, SI-8102 b46d7aefd6<notextile>SI-8102 -0.0.abs must equal 0.0</notextile>
SI-7837 feebc7131c<notextile>SI-7837 quickSort, along with Ordering[K], may result in stackoverflow because t</notextile>
SI-7880 d2ee92f055<notextile>SI-7880 Fix infinite loop in ResizableArray#ensureSize</notextile>
SI-8052 ea8ae48c18<notextile>SI-8052 Disallow macro as an identifier</notextile>
SI-8047 b97d44b2d8<notextile>SI-8047 change fresh name encoding to avoid owner corruption</notextile>
SI-7406 72cd50c11b<notextile>SI-7406 crasher with specialized lazy val</notextile>
SI-8091 bce97058c4<notextile>makes boxity of fast track macros configurable</notextile>
SI-8006 d92effc8a9<notextile>SI-8006 prevents infinite applyDynamicNamed desugarings</notextile>
SI-7777 bbd03b26f1<notextile>SI-7777 applyDynamic macro fails for nested application</notextile>
SI-8104, SI-8104 4b9e8e3417<notextile>codifies the state of the art wrt SI-8104</notextile>
SI-6355, SI-6355, SI-7059 431e19f9f1<notextile>SI-6355 SI-7059 it is possible to overload applyDynamic</notextile>
SI-6120 9b2ce26887<notextile>SI-6120 Suppress extra warnings</notextile>
SI-8017 6a4947c45c<notextile>SI-8017 Value class awareness for -Ydelamdafy:method</notextile>
SI-6231 3b8b24a48b<notextile>Remove obsolete diagnostic error for SI-6231</notextile>
SI-7012, SI-6231, SI-2897, SI-5508 cca4d51dbf<notextile>SI-5508 Fix crasher with private[this] in nested traits</notextile>
SI-7971 f7f80e8b27<notextile>SI-7971 Handle static field initializers correctly</notextile>
SI-7546 a3a5e4a6f5<notextile>SI-7546 Use likely monotonic clock source for durations</notextile>
SI-8042 a5fc6e69e0<notextile>SI-8042 Use Serialization Proxy Pattern in List</notextile>
SI-7618 6688da4fb3<notextile>SI-7618 Remove octal number literals</notextile>
SI-8030 760df9843a<notextile>SI-8030 force symbols on presentation compiler initialization</notextile>
SI-8059 f0f0a5e781<notextile>SI-8059 Override immutable.Queue#{+:,:+} for performance</notextile>
SI-8024 b2b9cf4f8c<notextile>SI-8024 Improve user-level toString of package objects</notextile>
SI-8024 e6cee26275<notextile>SI-8024 Fix inaccurate message on overloaded ambiguous ident</notextile>
SI-8024 a443bae839<notextile>SI-8024 Pending test case for package object / overloading bug</notextile>
SI-6780 110fde017e<notextile>SI-6780 Refactor Context#implicitss</notextile>
SI-6780 0304e00168<notextile>SI-6780 Better handling of cycles in in-scope implicit search</notextile>
SI-7912 006e2f2aad<notextile>SI-7912 Be defensive calling toString in MatchError#getMessage</notextile>
SI-8060 bb427a3416<notextile>SI-8060 Avoid infinite loop with higher kinded type alias</notextile>
SI-7995 5ed834e251<notextile>SI-7995 completion imported vars and vals</notextile>
SI-8019 c955cf4c2e<notextile>SI-8019 Make Publisher check PartialFunction is defined for Event</notextile>
SI-8029 fdcc262070<notextile>SI-8029 Avoid multi-run cyclic error with companions, package object</notextile>
SI-7439 8d74fa0242<notextile>[backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols.</notextile>
SI-8010 9036f774bc<notextile>SI-8010 Fix regression in erasure double definition checks</notextile>
SI-8050 85692fffdd<notextile>SI-8050 [Avian] Skip instrumented tests</notextile>
SI-8027 30f779b4d9<notextile>SI-8027 REPL double tab regression</notextile>
SI-4841 1d30ea8669<notextile>SI-4841 Plugins get a class path</notextile>
SI-7928, SI-8054 369f370b1e<notextile>SI-8054 Fix regression in TypeRef rebind with val overriding object</notextile>
SI-7789 e6eed418ee<notextile>SI-7789 make quasiquotes deconstruct UnApply trees</notextile>
SI-7980, SI-7996 4c899ea34c<notextile>Refactor Holes and Reifiers slices of Quasiquotes cake</notextile>
SI-7979 26a3348271<notextile>SI-7979 Fix quasiquotes crash on mismatch between fields and constructor</notextile>
SI-6842 0ccd4bcac6<notextile>SI-6842 Make splicing less sensitive to precise types of trees</notextile>
SI-8009 2695924907<notextile>SI-8009 Ensure that Idents preserve isBackquoted property</notextile>
SI-8016 207b945353<notextile>SI-8016 Ensure that q"..$xs" is equivalent to q"{..$xs}"</notextile>
SI-8008 8bde124040<notextile>SI-8008 Make q"f(..$xs)" only match trees with Apply node</notextile>
SI-8013 1b454185c4<notextile>SI-8013 Nowarn on macro str interpolation</notextile>
SI-7982 7d4109486b<notextile>SI-7982 Changed contract of askLoadedType to unload units by default</notextile>
SI-6913 70634395a4<notextile>SI-6913 Fixing semantics of Future fallbackTo to be according to docs</notextile>
SI-7458 02308c9691<notextile>SI-7458 Pres. compiler must not observe trees in silent mode</notextile>
SI-7548 652b3b4b9d<notextile>SI-7548 Test to demonstrate residual exploratory typing bug</notextile>
SI-7548 b7509c922f<notextile>SI-7548 askTypeAt returns the same type whether the source was fully or targeted</notextile>
SI-8014 03bf97e089<notextile>Fixes SI-8014, regression in Vector ++ TraversableOnce.</notextile>
SI-7373 1071d0ca86<notextile>SI-7373 Make the constructor of Vector non-public</notextile>
SI-7756, SI-8023 a89000be9f<notextile>SI-8023 Fix symbol-completion-order type var pattern bug</notextile>
SI-6406, SI-7737, SI-8022 32b756494e<notextile>SI-8022 Backwards compatibility for Regex#unapplySeq</notextile>
SI-8005 3629b645cc<notextile>SI-8005 Fixes NoPositon error for updateDynamic calls</notextile>
SI-8004 696545d53f<notextile>SI-8004 Resolve NoPosition error for applyDynamicNamed method call</notextile>
SI-7463, SI-8003 b915f440eb<notextile>SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls</notextile>
SI-7280 053a2744c6<notextile>[nomaster] SI-7280 Scope completion not returning members provided by imports</notextile>
SI-7915 04df2e48e4<notextile>SI-7915 Corrected range positions created during default args expansion</notextile>
SI-8002 28bf4ada31<notextile>SI-8002 private access for local companions</notextile>
SI-4332 f12bb7bda4<notextile>SI-4332 Plugs the gaps in views</notextile>
SI-7984 0271a4a394<notextile>SI-7984 Issue unchecked warning for type aliases</notextile>
SI-8011 05620ad4e1<notextile>SI-8011 Test case for recently fixed value class bug</notextile>
SI-7969 8f20fa23db<notextile>SI-7969 REPL variable columnar output</notextile>
SI-7969 02359a09eb<notextile>SI-7969 Refactor to trait with test</notextile>
SI-7969 28cfe16fdd<notextile>SI-7969 REPL -C columnar output</notextile>
SI-7872 518635385a<notextile>SI-7872 Plug a variance exploit in refinement types</notextile>
SI-8001 66577fa6ec<notextile>SI-8001 spurious "pure expression does nothing" warning</notextile>
SI-7967 a5e24768f2<notextile>SI-7967 Account for type aliases in self-type checks</notextile>
SI-7999 64603653f8<notextile>SI-7999 s.u.c.NonFatal: StackOverflowError is fatal</notextile>
SI-7983 dfe0ba847e<notextile>SI-7983 Fix regression in implicit divergence checking</notextile>
SI-7985 1050745dca<notextile>SI-7985 Refactor parsing of pattern type args</notextile>
SI-7985 b1d305388d<notextile>SI-7985 Allow projection of lower-cased prefix as pattern type arg</notextile>
SI-7985 77ecff775e<notextile>SI-7985 Allow qualified type argument in patterns</notextile>
SI-7221 d6a457cdc9<notextile>SI-7221 rewrites pollForWork non-recursively</notextile>
SI-6329, SI-6329 b27c9b84be<notextile>SI-6329 Graduation day for pending tests for tag materialization</notextile>
SI-7944, SI-7987 5eef542ae4<notextile>SI-7987 Test case for "macro not expanded" error with implicits</notextile>
SI-7280 0f9c1e7a9a<notextile>SI-7280 Remove unneccesary method</notextile>

#### Complete commit list!

sha Title
1baf11a2bb<notextile>SI-8143 Fix bug with super-accessors / dependent types</notextile>
9df2dcc584<notextile>[nomaster] SI-8152 Backport variance validator performance fix</notextile>
c91d373a78<notextile>SI-8111 Expand the comment with a more detailed TODO</notextile>
2c770ae31a<notextile>SI-8111 Repair symbol owners after abandoned named-/default-args</notextile>
5876e8c621<notextile>[nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120</notextile>
bd4adf5c97<notextile>More clear implicitNotFound error for ExecutionContext</notextile>
255c51b3dd<notextile>SI-6563 Test case for already-fixed crasher</notextile>
c0cb1d891a<notextile>[nomaster] codifies the state of the art wrt SI-8104</notextile>
7e85b59550<notextile>SI-8085 Fix BrowserTraverser for package objects</notextile>
a12dd9c3b6<notextile>Test demonstrating SI-8085</notextile>
3fa2c97853<notextile>Report error on code size overflow, log method name.</notextile>
2aa9da578e<notextile>Partially revert f8d8f7d08d.</notextile>
47562e7adb<notextile>Revert "SI-6426, importable _."</notextile>
f0d913b51d<notextile>SI-8062 Fix inliner cycle with recursion, separate compilation</notextile>
c258ccc9b5<notextile>Don't trace the low-level details of ResetAttrs under -Ydebug</notextile>
b7b210db14<notextile>Avoid cycles in Symbol toString under -Ydebug</notextile>
06bae51b07<notextile>Problem with EOL in tests for Printers is fixed</notextile>
99a75c0a91<notextile>Fix typo</notextile>
03e9e95f57<notextile>Test edge cases of literal lifting</notextile>
6283c01462<notextile>Give better names to UnliftHelper1 and UnliftHelper2</notextile>
ae4a2f0f7b<notextile>Lift Some, None, Nil, Left, Right not just supertypes</notextile>
722c743331<notextile>Remove redundant asInstanceOf for liftable</notextile>
ca05d22006<notextile>SI-8157 Make overloading, defaults restriction PolyType aware</notextile>
a1c00ae4b2<notextile>Dotless type application for infix operators.</notextile>
6f4dfb4c85<notextile>deprecates c.enclosingTree-style APIs</notextile>
034f6b9452<notextile>SI-6253 HashSet should implement union</notextile>
f9cbcbdaf8<notextile>overzealous assert in BCodeBodyBuilder rejected throw null</notextile>
841dbc9c8c<notextile>removing defensive code made obsolete by existing fix to SI-5604</notextile>
c4e37d6521<notextile>overzealous assert in GenBCode</notextile>
f1ca1a3823<notextile>removing dead code in BCodeSyncAndTry</notextile>
6eed8d00a5<notextile>there's a reason for this code in GenBCode</notextile>
7ee1a8321e<notextile>GenBCode version of "not eliminate loadmodule on static methods."</notextile>
7d1e8aa74d<notextile>GenBCode version of "Updating Position call sites" commit</notextile>
94e05a8501<notextile>SI-8126 Puts SI-7335 fix behind a source level flag</notextile>
6dd3653b9c<notextile>SI-8126 Puts SI-6899 fix under a source level flag</notextile>
d43618a92c<notextile>SI-8126 Add a '-Xsource' flag allowing compilation in e.g. 2.10 mode</notextile>
994de8ffd1<notextile>SI-4370 Range bug: Wrong result for Long.MinValue to Long.MaxValue by Int.MaxVal</notextile>
973c7066b8<notextile>SI-8148 fix anonymous functions with placeholders</notextile>
9c5e7f3893<notextile>Repairs unexpected failure of test t6200.scala</notextile>
47a91d76fc<notextile>SI-6200 - HashMap should implement filter</notextile>
afcfba02ed<notextile>SI-6196 - Set should implement filter</notextile>
af75be6034<notextile>SI-7544 StringContext.f docs update</notextile>
bfa70315d7<notextile>SI-6457 ImmutableSetFactory.empty results in StackOverflowError</notextile>
29541ce396<notextile>Quasi-comprehensive BigDecimal soundness/correctness fix.</notextile>
2477bbd9d6<notextile>SI-8100 - prevent possible SOE during Stream#flatten.</notextile>
765ac94c2b<notextile>SI-7469 Remove misc. @deprecated elements</notextile>
f606d8176e<notextile>SI-8015 Refactor per code review</notextile>
2c8a8ff6ba<notextile>SI-8015 Carat => Caret</notextile>
8be560a1cf<notextile>SI-8015 Unprintables in messages</notextile>
bb2e99a692<notextile>SI-8015 Count lines by EOLs</notextile>
c5567e2700<notextile>SI-8035 Deprecate automatic () insertion in argument lists</notextile>
2fe767806b<notextile>SI-8107: Use Regex.quote</notextile>
b8a76f688c<notextile>SI-8081 unzip/unzip3 return wrong static type when applied to Arrays</notextile>
d680d23947<notextile>toCode renamed to showCode</notextile>
3989227e45<notextile>Code cleanup based on pull request comments</notextile>
68ba3efba9<notextile>Annotated trees processing is modified</notextile>
2357e5dace<notextile>Printers code refactoring and cleanup</notextile>
0754abb566<notextile>Tests for ParsedTreePrinter</notextile>
0ac5c56837<notextile>toCode is added to Printers</notextile>
6536256f0e<notextile>val showOuterTests is removed</notextile>
64c9122aa3<notextile>Variance annotations printing</notextile>
8642a50da8<notextile>SI-8132 Fix false "overrides nothing" for case class protected param</notextile>
b33740f0b4<notextile>Improved documentation of HashTrieSet internals</notextile>
24a227d23d<notextile>Implements specialized subsetOf for HashSet</notextile>
a09e143b7f<notextile>SI-8146 Fix non-deterministic <:< for deeply nested types</notextile>
2e28cf7f76<notextile>SI-8146 Test cases for typechecking decidability</notextile>
8beeef339a<notextile>SI-8146 Pending test, diagnosis for bug in decidability of <:<</notextile>
65a2a417d8<notextile>Removes TODO comments that are no longer applicable</notextile>
b2f67b5730<notextile>removes Scala reflection-based macro runtime</notextile>
3a689f5c42<notextile>changes bundles to be classes, not traits extending Macro</notextile>
5cc8f83c68<notextile>*boxContext => *box.Context , *boxMacro => *box.Macro</notextile>
10f58e9d6a<notextile>Fix infinite recursion in name-based patmat.</notextile>
3e9e2c65a6<notextile>SI-8128 Fix regression in extractors returning existentials</notextile>
969a269033<notextile>Finalized some case classes, for better static checking.</notextile>
e0a3702f8a<notextile>Eliminated some dead/redundant code based on review.</notextile>
1696145f76<notextile>SI-8045 type inference of extracted value</notextile>
def46a9d44<notextile>SI-7850 CCE in patmat with invalid isEmpty.</notextile>
11bfa25e37<notextile>SI-7897, SI-6675 improves name-based patmat</notextile>
8dd69ecfa7<notextile>SI-6615 junit test</notextile>
a90f39cdb5<notextile>SI-8058 Better support for enum trees</notextile>
77a66d3525<notextile>SI-4841 CLI help update for -Xplugin</notextile>
6f42bd6881<notextile>SI-8046 Only use fast TypeRef#baseTypeSeq with concrete base types</notextile>
0de991ffea<notextile>Pending test for SI-6161</notextile>
edc9edb79b<notextile>SI-8046 Fix baseTypeSeq in presence of type aliases</notextile>
28d3390e07<notextile>SI-2066 Plug a soundness hole higher order type params, overriding</notextile>
ad594604ed<notextile>SI-6615 PagedSeq's slice throws a NPE if it starts on a page that hasn't been co</notextile>
973f69ac75<notextile>SI-6364 SetWrapper does not preserve performance / behavior</notextile>
cb0d2854e1<notextile>SI-7680 Update the ScalaDoc entry page of the Scala library</notextile>
505dc908dd<notextile>Fixes #3330 with Scaladoc changes only</notextile>
00e11ffdd4<notextile>SI-8129 Plug a leak in perRunCaches</notextile>
945f859475<notextile>fixes run/macroPlugins-namerHooks.scala</notextile>
1d908106cf<notextile>SI-8131 Move test for reflection thread safety to pending.</notextile>
3b68163e47<notextile>SI-8135 Disabled flaky hyperlinking presentation compiler test</notextile>
4b6a0a999e<notextile>SI-7443 Use typeclass instance for {Range,NumericRange}.sum</notextile>
a6f84efd87<notextile>Update man pages for scala and scalac.</notextile>
60c7427d2f<notextile>License formatting tweak, RTF version.</notextile>
4a4454b8f9<notextile>Explicit jline dependency.</notextile>
c1c368bb2c<notextile>Always copy man/* and doc/tools/*.</notextile>
c1ef1527f9<notextile>Fix typo in scala-library-all-pom.xml.</notextile>
50e7f2ba49<notextile>scala-library-all: dependency for those who want it all</notextile>
0dde1ae27f<notextile>scala-dist: all you need to roll your own scala distribution</notextile>
94ca91dd5f<notextile>Prepare maven-based distribution building.</notextile>
846d8d1195<notextile>Remove spurious resurrection of src/swing.</notextile>
c926974c30<notextile>Remove the unused test.continuations.suite.</notextile>
f5e35ecf81<notextile>Remove temporary binary compat scaffolding from AbstractPartiionFun.</notextile>
94eb751d00<notextile>Removes unnecessary generality in the macro engine</notextile>
6e4c926b4a<notextile>Use macro expandee, rather than expansion, in pres. compiler</notextile>
d744921f85<notextile>SI-8064 Automatic position repair for macro expansion</notextile>
d6b4cda628<notextile>Test to show the bug with hyperlinking in macro arguments</notextile>
7e0eee211f<notextile>More robust hyperlink tests for the presentation compiler</notextile>
db6e3062c1<notextile>ExistentialTypeTree.whereClauses are now MemberDefs</notextile>
9ce25045dd<notextile>Fix typo in documentation</notextile>
2e7c7347b9<notextile>SI-7974 Clean up and test 'Symbol-handling code in CleanUp</notextile>
5e1e472fa1<notextile>SI-7974 Avoid calling nonPrivateMember after erasure</notextile>
4936c43c13<notextile>SI-4827 Corrected positions assigned to constructor's default arg</notextile>
bdb0ac0fe5<notextile>SI-4827 Test to demonstrate wrong position of constructor default arg</notextile>
7f4720c5db<notextile>SI-4287 Added test demonstrating hyperlinking to constructor's argument</notextile>
ccacb06c49<notextile>Presentation compiler hyperlinking on context bounds test</notextile>
906e517135<notextile>SI-7491 deprecate overriding App.main and clarify documentation</notextile>
7f16e4d1c5<notextile>SI-7859 fix AnyVal.scala scaladoc.</notextile>
87913661e1<notextile>hooks for naming and synthesis in Namers.scala and Typers.scala</notextile>
4d92aec651<notextile>unprivates important helpers in Namers.scala</notextile>
6c7b003003<notextile>manifests that Namers.mkTypeCompleter is flag-agnostic</notextile>
0019bc2c4b<notextile>humane reporting of macro impl binding version errors</notextile>
68b8e23585<notextile>hooks for typecheck and expansion of macro defs</notextile>
279e2e3b50<notextile>unprivates important helpers in Macros.scala</notextile>
447e737174<notextile>removes some copy/paste from AnalyzerPlugins</notextile>
9e14058dd2<notextile>gives a more specific signature to computeMacroDefType</notextile>
9737b808c1<notextile>macroExpandApply => macroExpand</notextile>
bbe963873d<notextile>SI-7492 Make scala.runtime.MethodCache private[scala]</notextile>
5b9966d077<notextile>SI-8120 Avoid tree sharing when typechecking patmat anon functions</notextile>
b46d7aefd6<notextile>SI-8102 -0.0.abs must equal 0.0</notextile>
5cc01766a6<notextile>Improved testing framework for sets and maps.</notextile>
feebc7131c<notextile>SI-7837 quickSort, along with Ordering[K], may result in stackoverflow because t</notextile>
5f08c78ccd<notextile>untyper is no more</notextile>
59cdd50fa8<notextile>awakens default getter synthesis from the untyper nightmare</notextile>
dafcbeb344<notextile>Fix typos in documentation</notextile>
d2ee92f055<notextile>SI-7880 Fix infinite loop in ResizableArray#ensureSize</notextile>
ea8ae48c18<notextile>SI-8052 Disallow macro as an identifier</notextile>
71a2102a2d<notextile>Use t- prefix instead of si- prefix for test files</notextile>
b97d44b2d8<notextile>SI-8047 change fresh name encoding to avoid owner corruption</notextile>
f417380637<notextile>typeCheck => typecheck</notextile>
c728ff3866<notextile>fix Stream#flatten example</notextile>
72cd50c11b<notextile>SI-7406 crasher with specialized lazy val</notextile>
bce97058c4<notextile>makes boxity of fast track macros configurable</notextile>
49239833f5<notextile>Added .ant-targets-build.xml to .gitignore.</notextile>
29037f5465<notextile>Remove commented out code from HashSet and HashMap</notextile>
08a5e03280<notextile>makes well-known packages and package classes consistent with each other</notextile>
187d73ed1b<notextile>duplicates arguments to macro typer APIs</notextile>
05eacadf41<notextile>Invalidate <uptodate> checks on edits to build-ant-macros.xml</notextile>
b79ee63dae<notextile>Fix Ant uptodate checking in OSGI JAR creation</notextile>
d92effc8a9<notextile>SI-8006 prevents infinite applyDynamicNamed desugarings</notextile>
bbd03b26f1<notextile>SI-7777 applyDynamic macro fails for nested application</notextile>
4b9e8e3417<notextile>codifies the state of the art wrt SI-8104</notextile>
431e19f9f1<notextile>SI-6355 SI-7059 it is possible to overload applyDynamic</notextile>
3ef5837be5<notextile>cosmetic changes to liftables</notextile>
9b2ce26887<notextile>SI-6120 Suppress extra warnings</notextile>
6a4947c45c<notextile>SI-8017 Value class awareness for -Ydelamdafy:method</notextile>
3b8b24a48b<notextile>Remove obsolete diagnostic error for SI-6231</notextile>
cca4d51dbf<notextile>SI-5508 Fix crasher with private[this] in nested traits</notextile>
b275c38c94<notextile>duplicates macro arguments before expansion</notextile>
f7f80e8b27<notextile>SI-7971 Handle static field initializers correctly</notextile>
ca2dbe55eb<notextile>drops the redundant typecheck of blackbox expansions</notextile>
a3b33419b0<notextile>whitebox macros are now first typechecked against outerPt</notextile>
bd615c62ac<notextile>refactors macroExpandApply</notextile>
e3cedb7e84<notextile>Improvements to partest-ack, plus partest-paths.</notextile>
d00ad5abe8<notextile>Fix osgi bundle name for continuations.</notextile>
30b389a9b0<notextile>Modularize the swing library.</notextile>
858a5d5137<notextile>Modularize continuations plugin.</notextile>
a3a5e4a6f5<notextile>SI-7546 Use likely monotonic clock source for durations</notextile>
d68bbe4b83<notextile>Fixup for #3265</notextile>
a5fc6e69e0<notextile>SI-8042 Use Serialization Proxy Pattern in List</notextile>
7db59bd998<notextile>fix typo in error messages</notextile>
6688da4fb3<notextile>SI-7618 Remove octal number literals</notextile>
760df9843a<notextile>SI-8030 force symbols on presentation compiler initialization</notextile>
f0f0a5e781<notextile>SI-8059 Override immutable.Queue#{+:,:+} for performance</notextile>
c4e1b032d9<notextile>Test case for recently improved unchecked warning</notextile>
b2b9cf4f8c<notextile>SI-8024 Improve user-level toString of package objects</notextile>
e6cee26275<notextile>SI-8024 Fix inaccurate message on overloaded ambiguous ident</notextile>
a443bae839<notextile>SI-8024 Pending test case for package object / overloading bug</notextile>
110fde017e<notextile>SI-6780 Refactor Context#implicitss</notextile>
0304e00168<notextile>SI-6780 Better handling of cycles in in-scope implicit search</notextile>
9cdbe28c00<notextile>Fixup #3248 missed a spot in pack.xml</notextile>
006e2f2aad<notextile>SI-7912 Be defensive calling toString in MatchError#getMessage</notextile>
bb427a3416<notextile>SI-8060 Avoid infinite loop with higher kinded type alias</notextile>
e555106070<notextile>Remove docs/examples; they reside at scala/scala-dist</notextile>
dc6dd58d9d<notextile>Remove unused android test and corresponding license.</notextile>
f8d8f7d08d<notextile>Do not distribute partest and its dependencies.</notextile>
5ed834e251<notextile>SI-7995 completion imported vars and vals</notextile>
c955cf4c2e<notextile>SI-8019 Make Publisher check PartialFunction is defined for Event</notextile>
fdcc262070<notextile>SI-8029 Avoid multi-run cyclic error with companions, package object</notextile>
8d74fa0242<notextile>[backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols.</notextile>
9036f774bc<notextile>SI-8010 Fix regression in erasure double definition checks</notextile>
3faa2eedd8<notextile>[nomaster] better error messages for various macro definition errors</notextile>
85692fffdd<notextile>SI-8050 [Avian] Skip instrumented tests</notextile>
30f779b4d9<notextile>SI-8027 REPL double tab regression</notextile>
1d30ea8669<notextile>SI-4841 Plugins get a class path</notextile>
369f370b1e<notextile>SI-8054 Fix regression in TypeRef rebind with val overriding object</notextile>
495b7b873b<notextile>Address minor pull request feedback points</notextile>
a09914ca9f<notextile>Test possible quasiquote runtime failures</notextile>
b9a900e5d2<notextile>Test usage of SubpatternsAttachment from a macro</notextile>
c9cd5eeb01<notextile>Test tuple lifting and unlifting</notextile>
e6eed418ee<notextile>SI-7789 make quasiquotes deconstruct UnApply trees</notextile>
1188f95acf<notextile>Introduce support for Unliftable for Quasiquotes</notextile>
4c899ea34c<notextile>Refactor Holes and Reifiers slices of Quasiquotes cake</notextile>
4be6ea147a<notextile>Provide a way for unapply macro to obtain a list of subpattens</notextile>
f3c260bf89<notextile>Move Liftable into the Universe cake; add additional standard Liftables</notextile>
26a3348271<notextile>SI-7979 Fix quasiquotes crash on mismatch between fields and constructor</notextile>
0ccd4bcac6<notextile>SI-6842 Make splicing less sensitive to precise types of trees</notextile>
2695924907<notextile>SI-8009 Ensure that Idents preserve isBackquoted property</notextile>
207b945353<notextile>SI-8016 Ensure that q"..$xs" is equivalent to q"{..$xs}"</notextile>
8bde124040<notextile>SI-8008 Make q"f(..$xs)" only match trees with Apply node</notextile>
eb78e90ca7<notextile>streamlines refchecking undesired symbol properties</notextile>
87979ad96f<notextile>deprecates macro def return type inference</notextile>
58eadc0952<notextile>add method dequeueOption to immutable.Queue</notextile>
1b454185c4<notextile>SI-8013 Nowarn on macro str interpolation</notextile>
5ba6e13b9e<notextile>undeprecates c.parse</notextile>
7d4109486b<notextile>SI-7982 Changed contract of askLoadedType to unload units by default</notextile>
70634395a4<notextile>SI-6913 Fixing semantics of Future fallbackTo to be according to docs</notextile>
02308c9691<notextile>SI-7458 Pres. compiler must not observe trees in silent mode</notextile>
652b3b4b9d<notextile>SI-7548 Test to demonstrate residual exploratory typing bug</notextile>
b7509c922f<notextile>SI-7548 askTypeAt returns the same type whether the source was fully or targeted</notextile>
0c963c9085<notextile>[nomaster] teaches toolbox about -Yrangepos</notextile>
51cd47491e<notextile>Removes Gen*View and Par*View</notextile>
2ce7b1269a<notextile>Deprecates Par*View and Gen*View</notextile>
3d804859d7<notextile>Use -Dupdate.versions to update versions.properties</notextile>
1d3ec4e708<notextile>better error messages for various macro definition errors</notextile>
03bf97e089<notextile>Fixes SI-8014, regression in Vector ++ TraversableOnce.</notextile>
e571c9cc3e<notextile>Better error messages for common Function/Tuple mistakes</notextile>
1071d0ca86<notextile>SI-7373 Make the constructor of Vector non-public</notextile>
d0aaa86a9f<notextile>SI-8023 Address review comments around typedHigherKindedType</notextile>
a89000be9f<notextile>SI-8023 Fix symbol-completion-order type var pattern bug</notextile>
32b756494e<notextile>SI-8022 Backwards compatibility for Regex#unapplySeq</notextile>
158c76ada5<notextile>Remove unused android tests.</notextile>
38e2d6ebd1<notextile>Rename build-support.xml to build-ant-macros.xml.</notextile>
7742a7d909<notextile>No longer support unreleased STARR.</notextile>
23f52a8aad<notextile>Move all macros in build.xml to build-support.xml.</notextile>
3629b645cc<notextile>SI-8005 Fixes NoPositon error for updateDynamic calls</notextile>
696545d53f<notextile>SI-8004 Resolve NoPosition error for applyDynamicNamed method call</notextile>
b915f440eb<notextile>SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls</notextile>
053a2744c6<notextile>[nomaster] SI-7280 Scope completion not returning members provided by imports</notextile>
eb9f0f7975<notextile>[nomaster] Adds test cases for scope completion</notextile>
3a8796da1a<notextile>[nomaster] Test infrastructure for scope completion</notextile>
04df2e48e4<notextile>SI-7915 Corrected range positions created during default args expansion</notextile>
28bf4ada31<notextile>SI-8002 private access for local companions</notextile>
f12bb7bda4<notextile>SI-4332 Plugs the gaps in views</notextile>
0271a4a394<notextile>SI-7984 Issue unchecked warning for type aliases</notextile>
05620ad4e1<notextile>SI-8011 Test case for recently fixed value class bug</notextile>
8f20fa23db<notextile>SI-7969 REPL variable columnar output</notextile>
02359a09eb<notextile>SI-7969 Refactor to trait with test</notextile>
28cfe16fdd<notextile>SI-7969 REPL -C columnar output</notextile>
518635385a<notextile>SI-7872 Plug a variance exploit in refinement types</notextile>
66577fa6ec<notextile>SI-8001 spurious "pure expression does nothing" warning</notextile>
a5e24768f2<notextile>SI-7967 Account for type aliases in self-type checks</notextile>
5d5596bb07<notextile>Special treatment for local symbols in TypeTreeMemberType</notextile>
b5be392967<notextile>Refactor away duplication between -Ydelambdafy:{inline,method}</notextile>
736613ea8a<notextile>Substitute new parameter symbols into lambda body</notextile>
cb37548ef8<notextile>Symbol substutition must consider ClassInfoType#parents</notextile>
d7d63e93f3<notextile>Tidy up the Uncurry component of delambdafy</notextile>
342b05b849<notextile>Test in quick mode for ant build</notextile>
7c9b41fa11<notextile>Update Eclipse classpath files</notextile>
1d8e8ffa0f<notextile>Revise paragraph (a revised #3164)</notextile>
ee6fbae3d0<notextile>correctly fails implicit search for invalid implicit macros</notextile>
64603653f8<notextile>SI-7999 s.u.c.NonFatal: StackOverflowError is fatal</notextile>
60ac821192<notextile>Account for a variation of package types in Implicit Divergence.</notextile>
d8ffaac6ae<notextile>Code reformatting in Implicits</notextile>
dfe0ba847e<notextile>SI-7983 Fix regression in implicit divergence checking</notextile>
e7443e2d5b<notextile>2.11.0-M7 starr, 1.11.1 scalacheck, bump modules.</notextile>
1050745dca<notextile>SI-7985 Refactor parsing of pattern type args</notextile>
b1d305388d<notextile>SI-7985 Allow projection of lower-cased prefix as pattern type arg</notextile>
77ecff775e<notextile>SI-7985 Allow qualified type argument in patterns</notextile>
d6a457cdc9<notextile>SI-7221 rewrites pollForWork non-recursively</notextile>
34358ee1e8<notextile>more precise isMacroApplication check</notextile>
5344a0316e<notextile>Remove deprecated constructor from the migration annotation</notextile>
d6ef83a2d7<notextile>use more specific cake dependencies</notextile>
1080da8076<notextile>refactor out fresh name prefix extraction logic</notextile>
2d4f0f1859<notextile>Removing deprecated code.</notextile>
b004c3ddb3<notextile>deprecate Pair and Triple</notextile>
b27c9b84be<notextile>SI-6329 Graduation day for pending tests for tag materialization</notextile>
5eef542ae4<notextile>SI-7987 Test case for "macro not expanded" error with implicits</notextile>
36d66c2134<notextile>deprecate scala.Responder</notextile>
33a086b97a<notextile>Handle TypeApply(fun, ...) for symbol-less funs</notextile>
733f7f0868<notextile>Prepare upgrade to scalacheck 1.11.</notextile>
ec89b59717<notextile>Upgrade pax-url-aether to 1.6.0.</notextile>
0f9c1e7a9a<notextile>SI-7280 Remove unneccesary method</notextile>

#### Scala 2.10.4-RC2 is now available!

We are very happy to announce the second release candidate of Scala 2.10.4! If no serious blocking issues are found, this will become the final 2.10.4 version. The release is available for download from scala-lang.org or from Maven Central.

The Scala team and contributors fixed 23 issues since 2.10.3! In total, 39 RC1 pull requests and 12 RC2 pull requests were merged on GitHub.

### Known Issues

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built right in is available through the following update-site:

Have a look at the getting started guide for more info.

### New features in the 2.10 series

Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

• Value Classes
• Implicit Classes
• String Interpolation
• Futures and Promises
• Dynamic and applyDynamic
• Dependent method types:
• def identity(x: AnyRef): x.type = x // the return type says we return exactly what we got
• New ByteCode emitter based on ASM
• Can target JDK 1.5, 1.6 and 1.7
• Emits 1.6 bytecode by default
• Old 1.5 backend is deprecated
• A new Pattern Matcher
• rewritten from scratch to generate more robust code (no more exponential blow-up!)
• code generation and analyses are now independent (the latter can be turned off with -Xno-patmat-analysis)
• Scaladoc Improvements
• Implicits (-implicits flag)
• Diagrams (-diagrams flag, requires graphviz)
• Groups (-groups)
• Modularized Language features
• Parallel Collections are now configurable with custom thread pools
• Akka Actors now part of the distribution
• scala.actors have been deprecated and the akka implementation is now included in the distribution.
• See the actors migration project for more information.
• Performance Improvements
• Faster inliner
• Range#sum is now O(1)
• Update of ForkJoin library
• Fixes in immutable TreeSet/TreeMap
• Improvements to PartialFunctions
• Addition of ??? and NotImplementedError
• Addition of IsTraversableOnce + IsTraversableLike type classes for extension methods
• Deprecations and cleanup
• Floating point and octal literal syntax deprecation
• Removed scala.dbc

### Experimental features

The API is subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

#### A big thank you to all the contributors!
#Author
21<notextile>Jason Zaugg</notextile>
15<notextile>Adriaan Moors</notextile>
4<notextile>Eugene Burmako</notextile>
3<notextile>Simon Schaefer</notextile>
3<notextile>Mirco Dotta</notextile>
3<notextile>Luc Bourlier</notextile>
2<notextile>Som Snytt</notextile>
2<notextile>Paul Phillips</notextile>
1<notextile>Mark Harrah</notextile>
1<notextile>Vlad Ureche</notextile>
1<notextile>James Ward</notextile>
1<notextile>Heather Miller</notextile>
1<notextile>Roberto Tyley</notextile>
1<notextile>François Garillot</notextile>

#### Commits and the issues they fixed since v2.10.3

Issue(s)CommitMessage
SI-8111c91d373<notextile>SI-8111 Expand the comment with a more detailed TODO</notextile>
SI-81112c770ae<notextile>SI-8111 Repair symbol owners after abandoned named-/default-args</notextile>
SI-7120, SI-8114, SI-71205876e8c<notextile>[nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120</notextile>
SI-7636, SI-6563255c51b<notextile>SI-6563 Test case for already-fixed crasher</notextile>
SI-8104, SI-8104c0cb1d8<notextile>[nomaster] codifies the state of the art wrt SI-8104</notextile>
SI-80857e85b59<notextile>SI-8085 Fix BrowserTraverser for package objects</notextile>
SI-8085a12dd9c<notextile>Test demonstrating SI-8085</notextile>
SI-642647562e7<notextile>Revert "SI-6426, importable _."</notextile>
SI-8062f0d913b<notextile>SI-8062 Fix inliner cycle with recursion, separate compilation</notextile>
SI-7912006e2f2<notextile>SI-7912 Be defensive calling toString in MatchError#getMessage</notextile>
SI-8060bb427a3<notextile>SI-8060 Avoid infinite loop with higher kinded type alias</notextile>
SI-79955ed834e<notextile>SI-7995 completion imported vars and vals</notextile>
SI-8019c955cf4<notextile>SI-8019 Make Publisher check PartialFunction is defined for Event</notextile>
SI-8029fdcc262<notextile>SI-8029 Avoid multi-run cyclic error with companions, package object</notextile>
SI-74398d74fa0<notextile>[backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols.</notextile>
SI-80109036f77<notextile>SI-8010 Fix regression in erasure double definition checks</notextile>
SI-79827d41094<notextile>SI-7982 Changed contract of askLoadedType to unload units by default</notextile>
SI-69137063439<notextile>SI-6913 Fixing semantics of Future fallbackTo to be according to docs</notextile>
SI-745802308c9<notextile>SI-7458 Pres. compiler must not observe trees in silent mode</notextile>
SI-7548652b3b4<notextile>SI-7548 Test to demonstrate residual exploratory typing bug</notextile>
SI-7548b7509c9<notextile>SI-7548 askTypeAt returns the same type whether the source was fully or targeted</notextile>
SI-80053629b64<notextile>SI-8005 Fixes NoPositon error for updateDynamic calls</notextile>
SI-8004696545d<notextile>SI-8004 Resolve NoPosition error for applyDynamicNamed method call</notextile>
SI-7463, SI-8003b915f44<notextile>SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls</notextile>
SI-7280053a274<notextile>[nomaster] SI-7280 Scope completion not returning members provided by imports</notextile>
SI-791504df2e4<notextile>SI-7915 Corrected range positions created during default args expansion</notextile>
SI-7776d15ed08<notextile>[backport] SI-7776 post-erasure signature clashes are now macro-aware</notextile>
SI-6546075f6f2<notextile>SI-6546 InnerClasses attribute refers to absent class</notextile>
SI-7638, SI-4012e09a8a2<notextile>SI-4012 Mixin and specialization work well</notextile>
SI-751950c8b39e<notextile>SI-7519: Additional test case covering sbt/sbt#914</notextile>
SI-7519ce74bb0<notextile>[nomaster] SI-7519 Less brutal attribute resetting in adapt fallback</notextile>
SI-4936, SI-6026e350bd2<notextile>[nomaster] SI-6026 backport getResource bug fix</notextile>
SI-60262bfe0e7<notextile>SI-6026 REPL checks for javap before tools.jar</notextile>
SI-729525bcba5<notextile>SI-7295 Fix windows batch file with args containing parentheses</notextile>
SI-70207b56021<notextile>Disable tests for SI-7020</notextile>
SI-77832ccbfa5<notextile>SI-7783 Don't issue deprecation warnings for inferred TypeTrees</notextile>
SI-7815733b322<notextile>SI-7815 Dealias before deeming method type as dependent</notextile>

#### Complete commit list!

shaTitle
c91d373<notextile>SI-8111 Expand the comment with a more detailed TODO</notextile>
2c770ae<notextile>SI-8111 Repair symbol owners after abandoned named-/default-args</notextile>
5876e8c<notextile>[nomaster] SI-8114 Binary compat. workaround for erasure bug SI-7120</notextile>
bd4adf5<notextile>More clear implicitNotFound error for ExecutionContext</notextile>
255c51b<notextile>SI-6563 Test case for already-fixed crasher</notextile>
c0cb1d8<notextile>[nomaster] codifies the state of the art wrt SI-8104</notextile>
7e85b59<notextile>SI-8085 Fix BrowserTraverser for package objects</notextile>
a12dd9c<notextile>Test demonstrating SI-8085</notextile>
3fa2c97<notextile>Report error on code size overflow, log method name.</notextile>
2aa9da5<notextile>Partially revert f8d8f7d08d.</notextile>
47562e7<notextile>Revert "SI-6426, importable _."</notextile>
f0d913b<notextile>SI-8062 Fix inliner cycle with recursion, separate compilation</notextile>
9cdbe28<notextile>Fixup #3248 missed a spot in pack.xml</notextile>
006e2f2<notextile>SI-7912 Be defensive calling toString in MatchError#getMessage</notextile>
bb427a3<notextile>SI-8060 Avoid infinite loop with higher kinded type alias</notextile>
27a3860<notextile>Update README, include doc/licenses in distro</notextile>
139ba9d<notextile>Add attribution for Typesafe.</notextile>
e555106<notextile>Remove docs/examples; they reside at scala/scala-dist</notextile>
dc6dd58<notextile>Remove unused android test and corresponding license.</notextile>
f8d8f7d<notextile>Do not distribute partest and its dependencies.</notextile>
5ed834e<notextile>SI-7995 completion imported vars and vals</notextile>
c955cf4<notextile>SI-8019 Make Publisher check PartialFunction is defined for Event</notextile>
fdcc262<notextile>SI-8029 Avoid multi-run cyclic error with companions, package object</notextile>
8d74fa0<notextile>[backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols.</notextile>
9036f77<notextile>SI-8010 Fix regression in erasure double definition checks</notextile>
3faa2ee<notextile>[nomaster] better error messages for various macro definition errors</notextile>
7d41094<notextile>SI-7982 Changed contract of askLoadedType to unload units by default</notextile>
7063439<notextile>SI-6913 Fixing semantics of Future fallbackTo to be according to docs</notextile>
02308c9<notextile>SI-7458 Pres. compiler must not observe trees in silent mode</notextile>
652b3b4<notextile>SI-7548 Test to demonstrate residual exploratory typing bug</notextile>
b7509c9<notextile>SI-7548 askTypeAt returns the same type whether the source was fully or targeted</notextile>
0c963c9<notextile>[nomaster] teaches toolbox about -Yrangepos</notextile>
3629b64<notextile>SI-8005 Fixes NoPositon error for updateDynamic calls</notextile>
696545d<notextile>SI-8004 Resolve NoPosition error for applyDynamicNamed method call</notextile>
b915f44<notextile>SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls</notextile>
053a274<notextile>[nomaster] SI-7280 Scope completion not returning members provided by imports</notextile>
eb9f0f7<notextile>[nomaster] Adds test cases for scope completion</notextile>
3a8796d<notextile>[nomaster] Test infrastructure for scope completion</notextile>
04df2e4<notextile>SI-7915 Corrected range positions created during default args expansion</notextile>
ec89b59<notextile>Upgrade pax-url-aether to 1.6.0.</notextile>
1d29c0a<notextile>[backport] Add buildcharacter.properties to .gitignore.</notextile>
31ead67<notextile>IDE needs swing/actors/continuations</notextile>
852a947<notextile>Allow retrieving STARR from non-standard repo for PR validation</notextile>
40af1e0<notextile>Allow publishing only core (pr validation)</notextile>
ba0718f<notextile>Render relevant properties to buildcharacter.properties</notextile>
d15ed08<notextile>[backport] SI-7776 post-erasure signature clashes are now macro-aware</notextile>
6045a05<notextile>Fix completion after application with implicit arguments</notextile>
075f6f2<notextile>SI-6546 InnerClasses attribute refers to absent class</notextile>
e09a8a2<notextile>SI-4012 Mixin and specialization work well</notextile>
50c8b39e<notextile>SI-7519: Additional test case covering sbt/sbt#914</notextile>
ce74bb0<notextile>[nomaster] SI-7519 Less brutal attribute resetting in adapt fallback</notextile>
e350bd2<notextile>[nomaster] SI-6026 backport getResource bug fix</notextile>
2bfe0e7<notextile>SI-6026 REPL checks for javap before tools.jar</notextile>
25bcba5<notextile>SI-7295 Fix windows batch file with args containing parentheses</notextile>
7b56021<notextile>Disable tests for SI-7020</notextile>
8986ee4<notextile>Disable flaky presentation compiler test.</notextile>
2ccbfa5<notextile>SI-7783 Don't issue deprecation warnings for inferred TypeTrees</notextile>
ee9138e<notextile>Bump version to 2.10.4 for nightlies</notextile>
733b322<notextile>SI-7815 Dealias before deeming method type as dependent</notextile>

### Functional Jobs

#### Analytics Software Developer (Scala) at Rackspace (Full-time)

overview

Rackspace is building a team to develop products focused on statistically-grounded analysis of the massive datasets Rackspace collects, with an emphasis on iteratively learning actions the business can take to increase revenue. We are prototyping our first product with Scala, Spark, Cassandra, Kafka, and Pallet. However, our effort is still young, and we have lots of room to pivot to appropriate technologies.

reasons to apply

• interesting work: You'll marry academic techniques with modern technologies at challenging scales.
• statistical integrity: We are committed to not just harvesting data, but also analyzing it with statistical rigor.
Towards this end, the team has both trained statisticians and software engineers working together.
• visibility and impact: Your work will help integrate the concerns of multiple departments.
• fresh start: This team and its work are new and relatively unencumbered, so your initiative can flourish.
• dedication to quality: We will expect and provide room for continuous refactoring. We won't let our short-term goals compromise our long-term vision.
• developer-sensitive/focused management: Management is highly technical and understands how to create an environment where the team can deliver high-quality work at a sustainable pace.
• open-source friendly: Rackspace has a strong precedent for converting products and libraries to a combination of open source and SaaS/PaaS solutions.
• flexibility: This team will have the flexibility of a startup with the support and stability of Rackspace's robust capital.

This team is still small and currently based in Austin, Texas (which is a really great city!). We appreciate the team-building benefits of collocation, but we understand that more experienced engineers are harder to uproot and may prefer to work remotely. Just let us know where you stand and what your preferences are, and perhaps we can work something out.
what we're looking for

• exceptional ability to craft generalized and modular abstractions to not only increase reuse, but also reduce risk
• expertise with distributed systems (our data does not always fit into memory or a single node)
• expertise with type-checked functional programming (types and FP are two of our architectural/engineering strategies to craft correctness and simplicity -- for instance, using scalaz to track effects with types or compose functionality with monad transformers)
• the ability to understand the statistics and machine learning required by the domain
• the openness to take your learning to an academic level
• strategies to deliver value incrementally and transparently to establish trust with the business
• strategies to design and grow the team
• a productive mix of autodidactic behaviors as well as good social skills to mentor and promote healthy discourse
• the drive to make all aspects of the product high-quality, whether production code, tests, documentation, the build, etc.

about Rackspace

Rackspace® (NYSE: RAX) is the global leader in hybrid cloud and founder of OpenStack®, the open-source operating system for the cloud. Hundreds of thousands of customers look to Rackspace to deliver the best-fit infrastructure for their IT needs, leveraging a product portfolio that allows workloads to run where they perform best—whether on the public cloud, private cloud, dedicated servers, or a combination of platforms. The company's award-winning Fanatical Support® helps customers successfully architect, deploy, and run their most critical applications. Headquartered in San Antonio, TX, Rackspace operates data centers on four continents. Rackspace is featured on Fortune's list of 100 Best Companies to Work For.
Equal Employment Opportunity Policy: Rackspace is committed to offering equal employment opportunity without regard to age, color, disability, gender, gender identity, genetic information, marital status, military status, national origin, race, religion, sexual orientation, veteran status, or any other legally protected characteristic.

Get information on how to apply for this position.

## January 22, 2014

### Ruminations of a Programmer

#### A Sketch as the Query Model of an EventSourced System

In my last post I discussed the count-min sketch, a data structure that can be used to process data streams using sub-linear space. In this post I will continue with some of my thoughts on how count-min sketches can be used in a typical event-sourced application architecture.

An event sourcing system typically has a query model, which provides a read-only view of how all the events are folded to produce a coherent view of the system. I have seen applications where the query model is typically rendered from a relational database, and the queries can take a lot of time to be successfully processed and displayed to the user if the data volume is huge. When we are talking about Big Data, this is not a very uncommon use case.

Instead of rendering the query from the RDBMS, quite a few types of queries can be rendered from a count-min sketch using sub-linear space. Consider the use case where you need to report the highest-occurring user-ids in a Twitter stream. The stream is continuous, huge, and never-ending, and you get to see each item once. So you take each item, parse out the user-id occurring in it, and update the sketch. Each entry of the sketch then contains the frequency of the user-ids that hash to that slot, and we can take the minimum of all the slots to which a user-id hashes in order to get the frequency of that user-id. The details of how this works can be found in my last post.
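The update/estimate cycle described above can be sketched in a few lines of Scala. This is a minimal illustration of the idea, not the Algebird implementation; the class name, the fixed depth/width, and the seeded-MurmurHash3 hashing scheme are all my own choices for the example:

```scala
import scala.util.hashing.MurmurHash3

// Minimal count-min sketch: `depth` hash rows with `width` counters each.
// An update increments one counter per row; an estimate takes the minimum
// across rows, so it may overestimate (hash collisions) but never underestimates.
class CountMinSketch(depth: Int, width: Int) {
  private val table = Array.ofDim[Long](depth, width)

  // Using the row index as the hash seed gives `depth` different hash functions.
  private def bucket(item: String, row: Int): Int =
    (MurmurHash3.stringHash(item, row) & 0x7fffffff) % width

  def add(item: String): Unit =
    for (row <- 0 until depth) table(row)(bucket(item, row)) += 1

  def estimate(item: String): Long =
    (0 until depth).map(row => table(row)(bucket(item, row))).min
}

object SketchDemo extends App {
  val sketch = new CountMinSketch(depth = 4, width = 1024)
  Seq("alice", "bob", "alice", "alice", "bob").foreach(sketch.add)
  println(sketch.estimate("alice")) // at least 3, the true count
}
```

Because `estimate` never undercounts, a heavy-hitter query only needs to check the estimate after each update and record ids that cross the threshold in an auxiliary structure.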
Consider the case where we need to find the heavy hitters - those user-ids whose frequency exceeds a pre-determined threshold. For that, in addition to the sketch we can also maintain a data structure like a heap or a tree in which we track the top-k heavy hitters. When a user-id appears, we update the sketch, get its estimated frequency from the sketch, and if it exceeds the threshold, also record it in this data structure. So at any point in time we can probe this accessory data structure to get the current heavy hitters. The Spark examples contain a sample implementation of this heavy hitters query from a Twitter stream using the CountMinSketchMonoid of Algebird.

Can this be a viable approach to implementing the query model in an event sourced system if the use case fits the approximation query approach? It can be faster, relatively cheap in space, and responsive enough to be displayed in dashboards in the form of charts or graphs.

## January 21, 2014

### scala-lang.org

#### 10 Years of Scala

The first release of Scala happened ten years ago on January 20th. Looking back I am stunned how we could have taken an experimental research language and turned it into a tool for everyday programming that’s used by hundreds of thousands of developers. This is even more surprising given that no big company or organization backed Scala. Instead it was a grassroots movement with many super smart and motivated contributors. They are far too numerous all to be listed here, but I nevertheless want to thank by name some of the contributors who influenced the trajectory of Scala in a crucial way. In particular, there were:

• The early EPFL contributors around Matthias Zenger, Michel Schinz, Philippe Altherr.
• The second wave of EPFL contributors, including Iulian Dragos, Philipp Haller, Lukas Rytz, Tiark Rompf, Stéphane Micheloud, Burak Emir, Vincent Cremet, Ingo Meier, Nikolay Mihaylov, Lex Spoon, Antonio Cunei, Sean McDermid, Erik Stenman.
• Early users who told the world about it: John Pretty, Miles Sabin, David Pollak, Dick Wall, Bill Venners, David McIver, Josh Suereth, Jonas Bonér, Viktor Klang, James Iry, Daniel Sobral and many others.
• Phil Bagwell, who designed our core collection structures, and was a great spokesperson for the community.
• Paul Phillips, who put in amazing work over many years.
• The many active open-source committers, including Simon Ochsenreither, Denys Shabalin, Pavel Pavlov, Dominik Gruntz, Rex Kerr.
• The Typesafe Scala team: Adriaan Moors, Jason Zaugg, Greg Kossakowski.
• The people working hard on giving us good tooling: Scala IDE, IntelliJ, NetBeans, SBT, Ensime.
• Lalit Pant, for making Scala accessible to children, and Shadaj Laddad for showing how much fun Scala can be.
• The people who contributed to our massive open online courses: Heather Miller, Aleksandar Prokopec, Vojin Jovanovic, Lukas Rytz, Nada Amin, Tobias Schlatter, Roland Kuhn, Erik Meijer.
• The other people who take Scala forward at EPFL: Hubert Plocinicak, Eugene Burmako, Manohar Jonalagedda, Vlad Ureche, Sandro Stucki, Miguel Garcia, Christopher Vogt.
• The vibrant Scala.js team around Sébastien Doeraene and Haoyi Li.
• The authors of all the Scala books.
• The people writing great open-source libraries using Scala and contributing them back to the public.

Drawing up this list, I am humbled by the amount of hard work people have put in to make Scala what it is. I am sure I have forgotten many others whose contributions were equally crucial. A big thank you to you all! Now, looking at the next ten years, I believe we have some truly exciting times ahead. I’ll write about some of the opportunities and challenges that I see in another post.
## January 20, 2014

### Functional Jobs

#### Senior Developer and Leader at TIM Group (Full-time)

This is a chance for a passionate developer to help us bring new products to market and find creative ways to build and enhance systems that solve our financial clients’ needs. We expect a Senior Developer to mentor and lead other developers, as well as add to our technology team's skill set, which ranges from deep technology knowledge to start-up experience to financial systems knowledge.

Our Business

Our systems help manage billion-dollar portfolios, feed complex quantitative financial models, and fuel the sales and trading desks of the world's largest investment banks. We give our teams the freedom to use the tools and languages that they need to solve the problems inherent in these systems. Due to our rapid business growth since launching our products in 2005, we have grown our development teams in our London and Boston offices. We offer a fun, challenging, and rewarding working environment and the opportunity to work with world-class talent to help you build a world-class career.

Our Technologies

Our systems are written primarily in Scala and Java on the server side, and for our web apps we largely develop rich front-ends using Javascript and tools like Backbone.js and Bootstrap. We integrate with MySQL and MongoDB databases, as well as MQ services and our continuous delivery infrastructure. We contribute to the open source community, and you can browse our work here: https://github.com/youdevise. You can also read our thoughts about this and everything else on our blog: https://devblog.timgroup.com/

Our Process

We enter new markets using a lean start-up approach based on minimum viable products. We believe discipline allows us to go faster. We use agile development techniques such as test-driven development, pair-programming, and continuous delivery.
We actively seek to learn and improve through retrospectives, lightning talks, and various meet-ups that we host in our offices. We expect our developers to care about designing software and systems that our clients will love, and to solve their problems quickly and iteratively, seeking creative ways to build valued systems.

Essential Attributes / Skills:

• Solid and proven coding background in any domain / language
• Use of and interest in open-source software
• Willingness to work in an environment that reflects and adapts to our clients' needs and market demands
• Excellent communication and inter-personal skills
• Diverse technology knowledge and a sense of curiosity to explore new and better ways to solve problems

Ideal Attributes:

• Expertise in both Java and Scala languages, ecosystems, and programming paradigms
• Expertise in building rich-client web front-ends using Javascript tools and libraries
• Strong command of testing techniques like TDD and BDD
• Experience in bringing new products to market
• Contributions (e.g. bug fixes) to libraries and development tools that you have used
• Experience / interest in the finance sector
• "Hobby Project" of interest / open-source contributor

Get information on how to apply for this position.

#### Senior Scala Engineer at TIM Group (Full-time)

TIM Group is a JVM shop with 3+ years of experience building systems in Scala, and previous experience in Java. We're doing a pragmatic mix of functional and imperative programming, and are looking for engineers to help us 'improve our game' as we continue to explore the functional side of Scala. Even on the Java side of our stack we’ve made a conscious effort to use FP techniques where appropriate (e.g. we’ve been using FunctionalJava for years); so this is a serious, informed decision - we value FP and would like to hire the right people to do more of it.
The ideal candidate would be passionate about finding creative ways to build and enhance systems that solve our financial clients’ needs - we are open to developers with experience in any language, but have a very strong preference for Java/Scala and in particular are seeking engineers with an existing FP skill-set/interest. We’re looking for Senior Engineers to join our teams in Boston, MA and London, UK.

Why Us

• We’re serious about writing clean/elegant code and about embracing immutability and functional programming constructs where they are appropriate
• We’re pragmatic - a val may usually be better than a var, but both have a place
• We seek to continuously learn and improve through retrospectives/meet-ups/etc.: London Scala User Group Dojo: http://www.meetup.com/london-scala/ Lightning Talks: http://vimeo.com/user3637590/videos
• We actively contribute to the community. You can browse our open source work here: https://github.com/youdevise Or read our thoughts here: https://devblog.timgroup.com/ We send patches upstream and look to work with the community where possible. Some of our recent work on Slick/Play (iteratees for database queries): https://github.com/youdevise/scalaquery-play-iteratees We’ve shared patches to: https://github.com/functionaljava/functionaljava/
• We develop software using techniques such as TDD, pair-programming, and continuous delivery

Essential Attributes/Skills:

• Solid and proven coding background in any domain/language
• Sound architectural skills and an interest in improving the design of systems
• Excellent communication and interpersonal skills
• A sense of curiosity to explore new and better ways to solve problems
• A willingness to take a position and make an impact by improving our practice/methods/outcomes.
Ideal Attributes:

• A strong interest in functional programming - e.g. at a minimum you should know what a Monad is and why you might use such a construct
• Expertise in both Java and Scala languages, ecosystems, and programming paradigms
• Strong command of testing techniques like TDD and BDD
• Contributions (e.g. bug fixes) to libraries and development tools that you have used
• Experience/interest in the finance sector
• "Hobby Project" of interest / open-source contributor

Get information on how to apply for this position.

## January 19, 2014

### Ruminations of a Programmer

#### Count-Min Sketch - A Data Structure for Stream Mining Applications

In today's age of Big Data, streaming is one of the techniques for low latency computing. Besides the batch processing infrastructure of the map/reduce paradigm, we are seeing a plethora of ways in which streaming data is processed at near real time to cater to some specific kinds of applications. Libraries like Storm, Samza and Spark belong to this genre and are starting to get their share of the user base in the industry today.

This post is not about Spark, Storm or Samza. It's about a data structure which is one of the relatively new entrants in the domain of stream processing, which is simple to implement, but has already proved to be of immense use in serving a certain class of queries over huge streams of data. I have been doing some reading about the application of such structures and thought of sharing my notes with the readers of my blog.

# Using Sublinear Space

Besides data processing, these tools also support data mining over streams, which includes serving specialized queries over data using limited space and time. Of course, if we store all data as they come, we can always serve queries with O(n) space. But since we are talking about huge data streams, it may not even be possible to run algorithms on the full set of data - it simply will be too expensive.
Even if we have the entire set of data in a data warehouse, processing the entire data set may take time and consume resources that we cannot afford, considering the fees charged under the evolving models of using platform-as-a-service within cloud based infrastructure. Also, since these algorithms work on data streams, there's a high likelihood that they will get to see the data only in a single pass. The bottom line is that we need algorithms that work in sub-linear space.

Working in sublinear space implies that we don't get to store or see all the data - hence an obvious conclusion is that we also don't get to deliver an exact answer to some queries. We rely on approximation techniques and deliver accuracy within a reasonably high probability bound. We don't store all the data; instead we store a lossy compressed representation of the data and serve user queries from this subset instead of the entire set.

One widely used technique for storing a subset of data is Random Sampling, where the data stored is selected through some stochastic mechanism. There are various ways to determine which data we select for storing and how we build the estimator for querying the data. There are pros and cons with this approach, but it's one of the simplest ways to do approximation based queries on streaming data.

There are a few other options like Histograms and Wavelet based synopses. But one of the most interesting data structures developed in recent times is the Sketch, which uses summary based techniques for serving approximation queries, gets around the typical problems that sampling techniques have, and is highly parallelizable in practice. An important class of sketch is one where the sketch vector (which is the summary information) is a linear transform of the input vector.
So if we model the input as a vector, we can multiply it by a sketch matrix to obtain the sketch vector that contains the synopsis data that we can use for serving approximation queries. Here's a diagrammatic representation of the sketch as a linear transform of the input data.

# Count-Min Sketch

One of the most popular forms of the sketch data structure is the Count-Min Sketch introduced by Cormode and Muthukrishnan in 2003. The idea is quite simple and the data structure is based on probabilistic algorithms to serve various types of queries on streaming data. The data structure is parameterized by two factors - ε and δ, where the error in answering a query is within a factor of ε with probability at least 1 - δ. So you can tune these parameters based on the space that you can afford and accordingly trade off the accuracy of the results that the data structure can serve you.

Consider a situation where you have a stream of data (typically modeled as a vector), like updates to stock quotes in a financial processing system, arriving continuously, that you need to process and report statistical queries on in real time.

• We model the data stream as a vector a[1 .. n], and the updates received at time t are of the form (it, ct), which means that the stock quote for a[it] has been incremented by ct. There are various models in which this update can appear, as discussed in Data Streams: Algorithms and Applications by Muthukrishnan, which includes negative updates as well, and the data structure can be tuned to handle each of these variants.
• The core of the data structure is a 2-dimensional array count[d, w] that stores the synopsis of the original vector and is used to report results of queries using approximation techniques. Hence the total space requirement of the data structure is (w * d). We can bound each of w and d in terms of our parameters ε and δ and control the level of accuracy that we want our data structure to serve.
• The data structure uses hashing techniques to process these updates and report queries using sublinear space. So assume we have d pairwise-independent hash functions {h1 .. hd} that each hash our inputs to the range (1 .. w). For the more curious mind, pairwise independence is a method to construct a universal hash family, a technique that ensures a lower number of collisions in the hash implementation.
• When an update (it, ct) comes in from the stream, we hash the index it through each of the hash functions h1 .. hd and increment the d cells of the array that it hashes to (one per hash function):

```
for i = 1 to d
  v = h(i)(it)        // v is between 1 and w
  count[i, v] += ct   // increment the cell count by ct
end
```

At any point in time, if we want to know the approximate value of an element a[i] of the vector a, we can get it by computing the minimum of the values in each of the d cells of count to which i hashes. This can be proved formally, but the general intuition is that since we are using hash functions there's always a possibility of multiple i's colliding onto the same cell and contributing additively to the value of the cell. Hence the minimum among all hashed cells is the closest candidate for the correct result of the query. The figure above shows the processing of the updates in a Count-Min sketch.

This is typically called the Point Query, which returns an approximation of a[i]. Similarly we can use a Count-Min sketch to serve approximation queries for ranges, which is typically a summation over multiple point queries. Another interesting application is to serve inner product queries, where the data structure is used to estimate the inner product of 2 vectors, a typical application of this being the estimation of join sizes in relational query processing. The paper Statistical Analysis of Sketch Estimators gives all the details of how to use sketching as a technique for this.
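The update loop and point query can be sketched in Scala. This is my own illustrative code, not the paper's or any library's: w and d are derived from ε and δ as w = ⌈e/ε⌉ and d = ⌈ln(1/δ)⌉, and the `++` merge shows the cell-wise addition that lets per-partition sketches be combined (assuming both sides were built with the same seed, i.e. the same hash functions).

```scala
import scala.util.Random

// Illustrative count-min sketch over Long-keyed updates (it, ct).
// With w = ceil(e / eps) and d = ceil(ln(1 / delta)), a point query
// overestimates by at most eps * (total count) with probability >= 1 - delta.
class CMS private (val w: Int, val d: Int,
                   private val count: Array[Array[Long]],
                   private val hashes: Array[Long => Int]) {

  // The update loop from the post: increment one cell per hash function.
  def add(it: Long, ct: Long = 1L): Unit =
    for (i <- 0 until d) count(i)(hashes(i)(it)) += ct

  // Point query: minimum over the d cells that `it` hashes to.
  def pointQuery(it: Long): Long =
    (0 until d).map(i => count(i)(hashes(i)(it))).min

  // Cell-wise sum: sketches built with the same parameters and hash
  // functions merge associatively, which is what makes them easy to
  // parallelize across distributed streams.
  def ++(that: CMS): CMS = {
    require(w == that.w && d == that.d)
    val merged = Array.tabulate(d, w)((i, j) => count(i)(j) + that.count(i)(j))
    new CMS(w, d, merged, hashes)
  }
}

object CMS {
  def apply(eps: Double, delta: Double, seed: Int = 1): CMS = {
    val w = math.ceil(math.E / eps).toInt
    val d = math.ceil(math.log(1.0 / delta)).toInt
    val rnd = new Random(seed)
    val p = 2147483647L // Mersenne prime 2^31 - 1
    // Pairwise-independent family: h(x) = ((a*x + b) mod p) mod w
    val hashes: Array[Long => Int] = Array.fill(d) {
      val a = 1L + rnd.nextInt(Int.MaxValue - 1)
      val b = rnd.nextInt(Int.MaxValue).toLong
      (x: Long) => (((a * math.floorMod(x, p) + b) % p) % w).toInt
    }
    new CMS(w, d, Array.ofDim[Long](d, w), hashes)
  }
}
```

Merging per-partition sketches with `++` gives the same cells as sketching the whole stream in one place, which is why the structure is attractive for distributed processing.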
Count-Min sketches have some great properties which make them a very useful data structure when processing distributed streams. They are associative and can be modelled as monoids, and hence perform very well in a distributed environment where you can parallelize sketch operations. In a future post I will discuss some implementation techniques and how we can use count-min sketches to serve some useful applications over data streams. Meanwhile, Twitter's algebird and ClearSpring's stream-lib offer implementations of Count-Min sketch and various other data structures applicable to stream mining applications.

## January 15, 2014

### Functional Jobs

#### Back-End Developer at Bench Accounting (Full-time)

If you have a finely honed appreciation for elegant code, read on!

What's Bench? - Check us out in TechCrunch and The New York Times!

We're a laid back group of people working hard on a tough problem. Bookkeeping is a universal point of pain for passionate people trying to pursue their dreams. They want to run their businesses, not do accounting, so we make that disappear. While the sex appeal of the problem may seem lacking, we genuinely believe that solving society's big problems is super sexy. And so are you.

Bench is ready to go. We recently finished the TechStars accelerator in NYC and raised a round of venture capital. We've built an early product and have a fanbase of early customers. We created our dream workspace, and opened offices in Manhattan and Gastown, Vancouver. Also there are a bunch of super-secret-holy-$#|7-that's-awesome announcements coming down the pipe.

When you throw in with us you're not only joining a cool company working on a wicked web app. You're joining at the inflection point – possibly the most exciting time to join any startup. This is an extraordinary leg of our journey, and becoming instrumental to the team now would pretty much make us best friends forever.

Here are some of the things you'll be getting up to:

Building our APIs for internal and external use, keeping Bench's codebase a delight to work with. Architecting how all of our systems fit together. Writing algorithms to solve accounting puzzles. Experimenting with machine learning to replace the human intelligence currently required to offer our services. Working in tight integration with our front-end team to make sure performance is great and people don't resent our loading spinner.

Essential Stuff

Unquestionable competence in Java and/or Scala. Rock solid command of web engineering best practices. Great communication skills both in person and in writing.

Optional / Total Boss Stuff

Experience with and opinions on back-end technologies like Camel, NoSQL, Akka, Spray. Experience with machine learning and/or NLP. Thoughtful opinions on the merits of sci-fi vs. fantasy and/or tolerance for overhearing similar conversation topics.

Get information on how to apply for this position.

## January 11, 2014

### Functional Jobs

#### Software engineer - founding team - Scala, Python at TrueAccord (Full-time)

Who are we looking for?

The ideal person cares about our mission. TrueAccord is taking on a huge market with a social mission. We’re not looking for code monkeys – we’re looking for engineers to join the founding team and help us define the way we operate, while solving a problem that many people have but find difficult to manage.

We are looking for independent thinkers and problem solvers. Large parts of our systems aren’t yet built or are rudimentary and we’re looking for people who can take on building a major component while using our services and tools. If you enjoy building and designing but can handle detailed code reviews and relentless bug killing, we’d like to talk to you.

We’re looking for opinionated people. We’re passionate about our work and we have strong opinions. We also back our opinions with data and are rigorous in testing. Gut feeling is respected but you should be ready to be challenged regarding your convictions and ideas. If you enjoy a spirited discussion, we’re a good fit.

We’re looking for strong craftsmen (and women). We use Scala, not because it’s cool but because we believe in it. We use Django since it’s a strong and proven framework. We use AngularJS because it’s flexible and gives our operators a lot of freedom. Our choice of tools is driven by our need and our strong technical capabilities. If you’re not simply drawn to the shiny new toy, this team is for you.

Qualifications:

• You are a strong engineer eager to be part of a founding team, with strong opinions about technology and methodology.
• You are comfortable with working with analysts, operators and other non-engineering staff.
• Experience in multiple languages and at least one compiled language is a must. Experience in Scala and Python is a plus.
• Background in Machine Learning or large-scale data collection and manipulation is a big plus.
• 2-3 years of coding experience working on an engineering-heavy product, especially one that you built yourself.
• Must be authorized to work in the US.

Get information on how to apply for this position.

## January 05, 2014

### Richard Dallaway

#### Fun with CRDTs

At the end of last year I had some fun implementing a CRDT. These are data structures designed to combine together when you have no control over order of changes, timing of changes, or the number of participants in the data structure. The example I looked at was a sequential datatype, namely the WOOT CRDT for collaborative text editing.

The presentation is below, but you can also watch the video of the talk I gave at Scala eXchange 2013.

<script async="async" class="speakerdeck-embed" data-id="2f9659803e480131084a06af3ff5da10" data-ratio="1.33333333333333" src="http://speakerdeck.com/assets/embed.js"> </script>

Before you do that, you might want to watch my colleague Noel's talk on Reconciling Eventually-Consistent Data with CRDTs which was, not coincidentally, right before mine.

If you find yourself looking at a network and thinking "how can I reliably combine these things without globally synchronised clocks?", do have a look at CRDTs because they are #fun and #interesting.
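WOOT itself is involved, but the defining property, that replicas can merge in any order, any number of times, is easy to see in a much simpler CRDT. Here is a minimal grow-only counter, an illustrative sketch of my own rather than anything from the talk:

```scala
// A grow-only counter (G-Counter), one of the simplest CRDTs. Each
// replica increments only its own slot; merge takes the per-replica
// maximum, so merges commute, associate, and are idempotent: the order
// and repetition of merges don't matter.
case class GCounter(counts: Map[String, Long] = Map.empty) {
  def increment(replica: String): GCounter =
    GCounter(counts.updated(replica, counts.getOrElse(replica, 0L) + 1))

  def merge(that: GCounter): GCounter =
    GCounter((counts.keySet ++ that.counts.keySet).map { k =>
      k -> math.max(counts.getOrElse(k, 0L), that.counts.getOrElse(k, 0L))
    }.toMap)

  // The converged value is the sum over all replicas' slots.
  def value: Long = counts.values.sum
}
```

Two replicas can increment independently and exchange states whenever the network allows; any interleaving of merges converges to the same counter.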

## December 23, 2013

### Functional Jobs

#### Senior Consultant at Tindr Solutions (Full-time)

We're looking for a great developer who communicates extremely well and who likes to travel. If you love to talk to people, are a great coder, want to learn the hottest tech around right now and are itching to get your airline and hotel status, drop us a line.

Note that this position requires 50-80% travel within North America. You must be a U.S. or Canadian citizen or able to easily acquire a U.S. work visa.

Want to use Play with Java? This is a great opportunity to build on your existing programming skills and apply the framework that is revolutionizing the way people build and deploy incredible web applications. An intermediate to expert ability with the Play! framework is required.

Do you love the feeling you get from solving a problem no one else seems to be able to figure out? Are you experienced in Java or Play and looking for challenging work in a tight-knit, dynamic, fun environment? If so, Tindr is the place for you.

We're currently looking for a senior consultant to join our team. You'll work with cutting edge frameworks and some very cool people.

What You'll Do:

• Develop using a bunch of nifty frameworks like Play! and Akka
• Release solid code, early and often
• Continuous delivery / continuous deployment: the agile delivery experience
• Responsible for Typesafe Reactive Platform training and consulting with clients
• Work with large enterprises to integrate the Typesafe stack into their daily lives
• Manage your own time and potentially that of a team to get deliverables in on time
• Work directly with clients to understand their needs and their challenges and work with them to solve them using the Typesafe platform
• Travel often, and bring your passion with you to enable our clients to adopt these technologies

What we're looking for:

We're looking for a senior web developer with consulting experience who possesses strong code-foo, has a commitment to quality code, and loves to travel and interact with others.

Desired Skills:

• Experienced programmer in Play with Java, or other high level language
• Play Framework, Play2, Production Server setup (including monitoring), Akka Actors
• Experience with Scala is considered an asset
• Analytics

Tindr is a custom software development and software outsourcing firm specializing in Scala, Akka, Play, .NET and SOA. We are a fresh, innovative and exciting company poised to take on the world. We’re the kids who used to take everything apart just to see how it worked. We’re looking for people who are as passionate about programming as they are about creating great products. We believe in taking on interesting projects and in sharing the wealth.

Get information on how to apply for this position.

#### Developer / Project Manager at Tindr Solutions (Full-time)

We’re looking for a strong developer who communicates extremely well and who likes to travel. If you love to talk to people, are a great coder, want to learn the hottest tech around right now and are itching to get your airline and hotel status, drop us a line.

Note that this position requires extensive travel between Ottawa (30%) and San Francisco (70%). You must be a U.S. or Canadian citizen or able to easily acquire a U.S. work visa.

Do you want to learn Scala? This is a great opportunity to build on your existing programming skills and learn the language that is revolutionizing the way people build scalable applications. You don’t need to know the language coming in, just have strong aptitude in software development and we’ll get you mentored and trained up.

Do you love the feeling you get from solving a problem no one else seems to be able to figure out? Are you a Java or Scala hacker who prefers code to XML? Are you looking for a challenging work term in a tight-knit, dynamic, fun environment? If so, Tindr is the place for you.

We’re currently looking for a developer / project manager to join our team. You’ll work with cutting edge frameworks and some very cool people.

What You’ll Do:

• Develop using Scala and a bunch of nifty frameworks like Play!, Akka
• Release solid code, early and often
• Work with large enterprises to integrate the Typesafe stack into their daily lives
• Manage your own time and potentially that of a team to get deliverables in on time
• Work directly with clients to understand their needs and their challenges and work with them to solve them using the Typesafe platform
• If you have the interest, train people on Scala in a formal setting

What You’ve Done:

• Experience programming in Scala, Java, .NET or other high level language
• Strong code-foo
• Understanding of relational database design
• Commitment to quality code
• Familiarity with Javascript framework(s)
• Familiarity with Scala, Play and Akka is a bonus

Tindr is a custom software development and software outsourcing firm specializing in Scala, Akka, Play, .NET and SOA. We are a fresh, innovative and exciting company poised to take on the world. We’re the kids who used to take everything apart just to see how it worked. We’re looking for people who are as passionate about programming as they are about creating great products. We believe in taking on interesting projects and in sharing the wealth.

Get information on how to apply for this position.

#### Senior Developer / Consultant at Tindr Solutions (Full-time)

We're looking for a great developer who communicates extremely well and who likes to travel. If you love to talk to people, are a great coder, want to learn the hottest tech around right now and are itching to get your airline and hotel status, drop us a line.

Note that this position requires travel. You must be a U.S. or Canadian citizen or able to easily acquire a U.S. work visa.

Want to use Play with Java? This is a great opportunity to build on your existing programming skills and apply the framework that is revolutionizing the way people build and deploy incredible web applications. An intermediate to expert ability with the Play! framework is required.

Do you love the feeling you get from solving a problem no one else seems to be able to figure out? Are you experienced in Java or Play and looking for challenging work in a tight-knit, dynamic, fun environment? If so, Tindr is the place for you.

We're currently looking for a senior consultant to join our team. You'll work with cutting edge frameworks and some very cool people.

What You'll Do:

• Develop using a bunch of nifty frameworks like Play! and Akka
• Release solid code, early and often
• Continuous delivery / continuous deployment: the agile delivery experience
• Responsible for Typesafe Reactive Platform training and consulting with clients
• Work with large enterprises to integrate the Typesafe stack into their daily lives
• Manage your own time and potentially that of a team to get deliverables in on time
• Work directly with clients to understand their needs and their challenges and work with them to solve them using the Typesafe platform
• Travel often, and bring your passion with you to enable our clients to adopt these technologies

What we're looking for:

We're looking for a senior web developer with consulting experience who possesses strong code-foo, has a commitment to quality code, and loves to travel and interact with others.

Desired Skills:

• Experienced programmer in Play with Java, or other high level language
• Play Framework, Play2, Production Server setup (including monitoring), Akka Actors
• Experience with Scala is considered an asset
• Analytics

Tindr is a custom software development and software outsourcing firm specializing in Scala, Akka, Play, .NET and SOA. We are a fresh, innovative and exciting company poised to take on the world. We’re the kids who used to take everything apart just to see how it worked. We’re looking for people who are as passionate about programming as they are about creating great products. We believe in taking on interesting projects and in sharing the wealth.

Get information on how to apply for this position.

## December 20, 2013

### Functional Jobs

#### Clojure Data Scientist at Lazada (Full-time)

Summary

Lazada operates in South East Asia with Amazon's business model (the everything store). We have millions of customers and raised a total of a couple hundred million USD to grow further and faster. Another company in our investors' portfolio has seen enormous success creating a data science team in Haskell, so we are in the process of building our own. Our weapon of choice is Clojure.

Details

It's a crossover between the traditional insights based work, and AI-flavored software engineering, as often a durable solution will need production code. At the very beginning, you will need to build the necessary infrastructure such as a data mart and servers on which you can build your work, so you need to be at least comfortable with dev ops work as well as the data science work. You can do this with complete freedom of tech stack (an example could be NixOS for purely functional package deployment on the AWS cloud).

Location: Singapore - we are happy to relocate you from anywhere in the world. No project management structure - work directly with the users, and we provide only the inputs ("here's the data") and outputs ("we need to solve this problem"), you figure out the middle bit. No corporate bureaucracy - wear shorts to work and turn up at lunchtime if you want. MBP 15" Retina or equivalent hardware will be happily provided, compensation will be competitive and in hard cash.

Get information on how to apply for this position.

#### Chief Data Scientist at Lazada (Full-time)

Summary

Lazada operates in South East Asia with Amazon's business model (the everything store). We have millions of customers and raised a total of a couple hundred million USD to grow further and faster. Another company in our investors' portfolio has seen enormous success creating a data science team in Haskell, so we are in the process of building our own. The first hire will be the most important and this is what this ad is about: coming in to build and lead the team. Accepted languages are Haskell, Clojure, Agda and potentially Scala.

Details

It's a crossover between the traditional insights based work, and AI-flavored software engineering, as often a durable solution will need production code. At the very beginning, you will need to build the necessary infrastructure such as a data mart and servers on which you can build your work, so you need to be at least comfortable with dev ops work as well as the more interesting data science work. You can do this with complete freedom of tech stack (an example could be NixOS for purely functional package deployment on the AWS cloud).

Location: Singapore or Bangkok - we are happy to relocate you from anywhere in the world. No project management structure - work directly with the users, and we provide only the inputs ("here's the data") and outputs ("we need to solve this problem"), you figure out the middle bit. No corporate bureaucracy - wear shorts to work and turn up at lunchtime if you want. MBP 15" Retina or equivalent hardware will be happily provided, compensation will be competitive and in hard cash.

How to apply

We estimate ability, and will measure your future performance, based on your code. As such, please solve the task below in your own time and email your code and results to pawel dot kuznicki at lazada dot com. Pawel will get back to you within a day.

Get information on how to apply for this position.

## December 19, 2013

### scala-lang.org

#### Scala 2.10.4-RC1 is now available!

We are very happy to announce the first release candidate of Scala 2.10.4! If no serious blocking issues are found this will become the final 2.10.4 version.

The release is available for download from scala-lang.org or from Maven Central.

The Scala team and contributors fixed 23 issues since 2.10.3!

In total, 39 RC1 pull requests were merged on GitHub.

### Known Issues

Before reporting a bug, please have a look at these known issues.

### Scala IDE for Eclipse

The Scala IDE with this release built right in is available through the following update-site:

Have a look at the getting started guide for more info.

### New features in the 2.10 series

Since 2.10.4 is strictly a bug-fix release, here’s an overview of the most prominent new features and improvements as introduced in 2.10.0:

• Value Classes

• Implicit Classes

• String Interpolation

• Futures and Promises

• Dynamic and applyDynamic

• Dependent method types:

• def identity(x: AnyRef): x.type = x // the return type says we return exactly what we got
• New ByteCode emitter based on ASM

• Can target JDK 1.5, 1.6 and 1.7

• Emits 1.6 bytecode by default

• Old 1.5 backend is deprecated

• A new Pattern Matcher

• rewritten from scratch to generate more robust code (no more exponential blow-up!)

• code generation and analyses are now independent (the latter can be turned off with -Xno-patmat-analysis)

• Implicits (-implicits flag)

• Diagrams (-diagrams flag, requires graphviz)

• Groups (-groups)

• Modularized Language features

• Parallel Collections are now configurable with custom thread pools

• Akka Actors now part of the distribution

• scala.actors has been deprecated and the Akka implementation is now included in the distribution.

• See the actors migration project for more information.

• Performance Improvements

• Faster inliner

• Range#sum is now O(1)

• Update of ForkJoin library

• Fixes in immutable TreeSet/TreeMap

• Improvements to PartialFunctions

• Addition of ??? and NotImplementedError

• Addition of IsTraversableOnce + IsTraversableLike type classes for extension methods

• Deprecations and cleanup

• Floating point and octal literal syntax deprecation

• Removed scala.dbc
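
The list above is terse, so here is a small self-contained Scala 2 sketch (plain Scala, no libraries; the object and method names are ours, not from the release notes) showing three of the listed features in action: string interpolation, a dependent method type like the `identity` example above, and `???`:

```scala
object Scala210Sketch {
  // String interpolation: s"..." splices expressions into strings
  def describe(name: String, n: Int): String = s"$name has $n elements"

  // Dependent method type: the return type promises exactly the argument's type
  def identity(x: AnyRef): x.type = x

  // ??? marks unimplemented code; calling it throws NotImplementedError
  def todo(): Int = ???

  def main(args: Array[String]): Unit = {
    val v = "hello"
    println(describe("xs", 3))
    println(identity(v) eq v) // the very same reference comes back
  }
}
```

Note that `identity(v): v.type` only stays precise when the argument is a stable identifier; for a non-stable argument the type widens.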

### Experimental features

These experimental APIs, most notably reflection and macros, are subject to (possibly major) changes in the 2.11.x series, but don’t let that stop you from experimenting with them! A lot of developers have already come up with very cool applications for them. Some examples can be seen at http://scalamacros.org/news/2012/11/05/status-update.html.

#### A big thank you to all the contributors!

| Commits | Author |
| --- | --- |
| 16 | Jason Zaugg |
| 3 | Simon Schaefer |
| 3 | Eugene Burmako |
| 3 | Luc Bourlier |
| 2 | Som Snytt |
| 2 | Paul Phillips |
| 2 | Mirco Dotta |
| 1 | Mark Harrah |
| 1 | Heather Miller |
| 1 | François Garillot |
| 1 | Roberto Tyley |

#### Commits and the issues they fixed since v2.10.3

| Issue(s) | Commit | Message |
| --- | --- | --- |
| SI-6426 | 47562e7adb | Revert "SI-6426, importable _." |
| SI-8062 | f0d913b51d | SI-8062 Fix inliner cycle with recursion, separate compilation |
| SI-7912 | 006e2f2aad | SI-7912 Be defensive calling toString in MatchError#getMessage |
| SI-8060 | bb427a3416 | SI-8060 Avoid infinite loop with higher kinded type alias |
| SI-7995 | 5ed834e251 | SI-7995 completion imported vars and vals |
| SI-8019 | c955cf4c2e | SI-8019 Make Publisher check PartialFunction is defined for Event |
| SI-8029 | fdcc262070 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| SI-7439 | 8d74fa0242 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| SI-8010 | 9036f774bc | SI-8010 Fix regression in erasure double definition checks |
| SI-7982 | 7d4109486b | SI-7982 Changed contract of askLoadedType to unload units by default |
| SI-6913 | 70634395a4 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| SI-7458 | 02308c9691 | SI-7458 Pres. compiler must not observe trees in silent mode |
| SI-7548 | 652b3b4b9d | SI-7548 Test to demonstrate residual exploratory typing bug |
| SI-7548 | b7509c922f | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| SI-8005 | 3629b645cc | SI-8005 Fixes NoPositon error for updateDynamic calls |
| SI-8004 | 696545d53f | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| SI-7463, SI-8003 | b915f440eb | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| SI-7280 | 053a2744c6 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| SI-7915 | 04df2e48e4 | SI-7915 Corrected range positions created during default args expansion |
| SI-7776 | d15ed081ef | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| SI-6546 | 075f6f260c | SI-6546 InnerClasses attribute refers to absent class |
| SI-7638, SI-4012 | e09a8a2b7f | SI-4012 Mixin and specialization work well |
| SI-7519 | 50c8b39ec4 | SI-7519: Additional test case covering sbt/sbt#914 |
| SI-7519 | ce74bb0060 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| SI-4936, SI-6026 | e350bd2cbc | [nomaster] SI-6026 backport getResource bug fix |
| SI-6026 | 2bfe0e797c | SI-6026 REPL checks for javap before tools.jar |
| SI-7295 | 25bcba59ce | SI-7295 Fix windows batch file with args containing parentheses |
| SI-7020 | 7b560213cb | Disable tests for SI-7020 |
| SI-7783 | 2ccbfa5778 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| SI-7815 | 733b3220c9 | SI-7815 Dealias before deeming method type as dependent |

#### Complete commit list!

| sha | Title |
| --- | --- |
| 3fa2c97853 | Report error on code size overflow, log method name. |
| 2aa9da578e | Partially revert f8d8f7d08d. |
| 47562e7adb | Revert "SI-6426, importable _." |
| f0d913b51d | SI-8062 Fix inliner cycle with recursion, separate compilation |
| 9cdbe28c00 | Fixup #3248 missed a spot in pack.xml |
| 006e2f2aad | SI-7912 Be defensive calling toString in MatchError#getMessage |
| bb427a3416 | SI-8060 Avoid infinite loop with higher kinded type alias |
| e555106070 | Remove docs/examples; they reside at scala/scala-dist |
| dc6dd58d9d | Remove unused android test and corresponding license. |
| f8d8f7d08d | Do not distribute partest and its dependencies. |
| 5ed834e251 | SI-7995 completion imported vars and vals |
| c955cf4c2e | SI-8019 Make Publisher check PartialFunction is defined for Event |
| fdcc262070 | SI-8029 Avoid multi-run cyclic error with companions, package object |
| 8d74fa0242 | [backport] SI-7439 Avoid NPE in isMonomorphicType with stub symbols. |
| 9036f774bc | SI-8010 Fix regression in erasure double definition checks |
| 3faa2eedd8 | [nomaster] better error messages for various macro definition errors |
| 7d4109486b | SI-7982 Changed contract of askLoadedType to unload units by default |
| 70634395a4 | SI-6913 Fixing semantics of Future fallbackTo to be according to docs |
| 02308c9691 | SI-7458 Pres. compiler must not observe trees in silent mode |
| 652b3b4b9d | SI-7548 Test to demonstrate residual exploratory typing bug |
| b7509c922f | SI-7548 askTypeAt returns the same type whether the source was fully or targeted |
| 0c963c9085 | [nomaster] teaches toolbox about -Yrangepos |
| 3629b645cc | SI-8005 Fixes NoPositon error for updateDynamic calls |
| 696545d53f | SI-8004 Resolve NoPosition error for applyDynamicNamed method call |
| b915f440eb | SI-7463,SI-8003 Correct wrong position for {select,apply}Dynamic calls |
| 053a2744c6 | [nomaster] SI-7280 Scope completion not returning members provided by imports |
| eb9f0f7975 | [nomaster] Adds test cases for scope completion |
| 3a8796da1a | [nomaster] Test infrastructure for scope completion |
| 04df2e48e4 | SI-7915 Corrected range positions created during default args expansion |
| ec89b59717 | Upgrade pax-url-aether to 1.6.0. |
| 1d29c0a08b | [backport] Add buildcharacter.properties to .gitignore. |
| 852a9479d0 | Allow retrieving STARR from non-standard repo for PR validation |
| 40af1e0c44 | Allow publishing only core (pr validation) |
| ba0718fd1d | Render relevant properties to buildcharacter.properties |
| d15ed081ef | [backport] SI-7776 post-erasure signature clashes are now macro-aware |
| 6045a05b83 | Fix completion after application with implicit arguments |
| 075f6f260c | SI-6546 InnerClasses attribute refers to absent class |
| e09a8a2b7f | SI-4012 Mixin and specialization work well |
| 50c8b39ec4 | SI-7519: Additional test case covering sbt/sbt#914 |
| ce74bb0060 | [nomaster] SI-7519 Less brutal attribute resetting in adapt fallback |
| e350bd2cbc | [nomaster] SI-6026 backport getResource bug fix |
| 2bfe0e797c | SI-6026 REPL checks for javap before tools.jar |
| 25bcba59ce | SI-7295 Fix windows batch file with args containing parentheses |
| 7b560213cb | Disable tests for SI-7020 |
| 8986ee4fd5 | Disable flaky presentation compiler test. |
| 2ccbfa5778 | SI-7783 Don't issue deprecation warnings for inferred TypeTrees |
| ee9138e99e | Bump version to 2.10.4 for nightlies |
| 733b3220c9 | SI-7815 Dealias before deeming method type as dependent |

## December 09, 2013

### Eric Torreborre

#### The revenge of the chunks

<status class="ok">

This series of posts is turning into a whole saga for something which was supposed to be a quick and easy demonstration of the superiority of functional programming over a simple loop. The first post introduced the problem. The second post was then about defining proper scalaz-stream combinators to do the same thing, and in particular how to "chunk" the processing in order to get good performance.

However as I was writing unit tests for my requirements I realized that the problem was harder than I thought. In particular, the files I'm processing can have several sections made of HEADERs and TRAILERs. When you create chunks of lines to process this results in a number of combinations that need to be analysed. A chunk can:

• start with a HEADER but not finish with a TRAILER which is in another chunk
• contain lines only
• contain lines + a TRAILER + a new HEADER
• and so on...

For each of these cases it is necessary to use the current state and the contents of the lines to determine whether the file is malformed. This is a lot less easy than before.

### All the combinations

This is what I came up with:

```scala
def process(path: String, targetName: String, chunkSize: Int = 10000): String \/ File = {
  val targetPath = path.replace(".DAT", "")+targetName
  val read =
    linesRChunk(path, chunkSize) |>
    validateLines.map(lines => lines.mkString("\n"))
  val task =
    ((read |> process1.intersperse("\n") |>
      process1.utf8Encode) to io.fileChunkW(targetPath)).run
  task.attemptRun.leftMap(_.getMessage).map(_ => new File(targetPath))
}

/**
 * validate that the lines have the right sequence of HEADER/column names/lines/TRAILER
 * and the right number of lines
 */
def validateLines: Process1[Vector[String], Vector[String]] = {
  // feed lines into the lines parser with a given state
  // when it's done, follow by parsing with a new state
  def parse(lines: Vector[String], state: LineState, newState: LineState) =
    emit(lines) |> linesParser(state) fby linesParser(newState)

  // parse chunks of lines
  def linesParser(state: LineState): Process1[Vector[String], Vector[String]] = {
    receive1[Vector[String], Vector[String]] { case lines =>
      lines match {
        case first +: rest if isHeader(first) =>
          if (state.openedSection) fail("A trailer is missing")
          else
            parse(lines.drop(2),
                  state.open,
                  LineState(lines.count(isHeader) > lines.count(isTrailer),
                            lines.drop(2).size))

        case first +: rest if isTrailer(first) =>
          val expected = "\\d+".r.findFirstIn(first).map(_.toInt).getOrElse(0)
          if (!state.openedSection)             fail("A header is missing")
          else if (state.lineCount != expected) fail(s"expected $expected lines, got ${state.lineCount}")
          else {
            val dropped = lines.drop(1)
            parse(dropped,
                  state.restart,
                  LineState(dropped.count(isHeader) > dropped.count(isTrailer),
                            dropped.size))
          }

        case first +: rest =>
          if (!state.openedSection) fail("A header is missing")
          else {
            val (first, rest) = lines.span(line => !isTrailer(line))
            emit(first) fby
            parse(rest, state.addLines(first.size), state.addLines(lines.size))
          }

        case Vector() => halt
      }
    }
  }

  // initialise the parsing expecting a HEADER
  linesParser(LineState())
}

private def fail(message: String)   = Halt(new Exception(message))
private def isHeader(line: String)  = line.startsWith("HEADER|")
private def isTrailer(line: String) = line.startsWith("TRAILER|")
```

The bulk of the code is the validateLines process which verifies the file structure:

• if the first line of this chunk is a HEADER the next line needs to be skipped, we know we opened a new section, and we feed the rest to the lines parser again. However we fail the process if we were not expecting a HEADER there

• if the first line of this chunk is a TRAILER we do something similar but we also check the expected number of lines

• otherwise we try to emit as many lines as possible until the next HEADER or TRAILER and we recurse

This is a bit complex because we need to analyse the first element of the chunk, then emit the rest and calculate the new state we will have when this whole chunk is emitted. On the other hand the processor is easy to test because I don't have to read or write files to check it. This would be a bit more difficult to do with the loop version.
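
The chunk-boundary bookkeeping described above can also be exercised without scalaz-stream. The following is a minimal sketch using a plain fold over chunks; `ChunkState`, `validateChunk`, and the omission of the column-names row are our simplifications for illustration, not the post's actual code:

```scala
// Minimal sketch of the HEADER/TRAILER bookkeeping across chunk boundaries,
// using plain Scala collections instead of scalaz-stream (hypothetical names).
object ChunkValidation {
  final case class ChunkState(openedSection: Boolean, lineCount: Int)

  private def isHeader(line: String)  = line.startsWith("HEADER|")
  private def isTrailer(line: String) = line.startsWith("TRAILER|")

  // Validate one chunk, threading the state on to the next chunk.
  // Returns either an error message or the updated state.
  def validateChunk(state: ChunkState, chunk: Vector[String]): Either[String, ChunkState] =
    chunk.foldLeft[Either[String, ChunkState]](Right(state)) {
      case (err @ Left(_), _) => err // once malformed, stay malformed
      case (Right(s), line) =>
        if (isHeader(line))
          if (s.openedSection) Left("A trailer is missing")
          else Right(ChunkState(openedSection = true, lineCount = 0))
        else if (isTrailer(line)) {
          val expected = "\\d+".r.findFirstIn(line).map(_.toInt).getOrElse(0)
          if (!s.openedSection) Left("A header is missing")
          else if (s.lineCount != expected) Left(s"expected $expected lines, got ${s.lineCount}")
          else Right(ChunkState(openedSection = false, lineCount = 0))
        }
        else if (!s.openedSection) Left("A header is missing")
        else Right(s.copy(lineCount = s.lineCount + 1))
    }
}
```

Because the state is explicit, a section can open in one chunk and close in another, which is exactly the case list enumerated above.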

But unfortunately not all the tests are green. One is still not passing. What if there is no ending TRAILER in the file? How can I raise an exception? There's no process to run, because there are no more lines to process! My test is pending for now, and I'll post the solution once I have it (maybe there's a smarter way to rewrite all of this?).

### Is it worth it?

This was definitely worth it for me in terms of learning the scalaz-stream library. However in terms of pure programmer "productivity", for this kind of requirement, it feels like overkill. The imperative solution is very easy to come up with and has no performance problems. This should change once streaming parsing is available (see the roadmap); this use case will probably just be expressed as a one-liner. In the light of this post I'm just curious how the implementation will deal with chunking.

</status>

#### runState 0 - combinators 1

<status class="ok">

In my previous blog post I was trying to implement a runState method with scalaz-stream to process a file and try to validate its internal structure. That was however not a good solution because:

• it doesn't use combinators but a special purpose runState method
• it stackoverflows on large files!

It turns out that there is a much better way of dealing with this use case.

### Combinators

First of all it is possible to propagate some state with scalaz-stream without having to write a special runState method. The following uses only combinators to do the job:

```scala
def process(path: String, targetName: String): String \/ File = {
  val HEADER  = "HEADER(.*)".r
  val TRAILER = "TRAILER\\|(\\d+)".r

  val lineOrTrailer: Process1[String, String] = {
    def go(lines: Int): Process1[String, String] =
      receive1[String, String] {
        case TRAILER(count) =>
          if (count.toInt == lines) halt
          else Halt(new Exception(s"Expected $count lines, but got $lines"))
        case HEADER(h) =>
          Halt(new Exception(s"Didn't expect a HEADER here: $h"))
        case s => emit(s) fby go(lines + 1)
      }
    go(0)
  }

  val linesStructure = discardRegex("HEADER.*") fby discardLine fby lineOrTrailer

  val read       = io.linesR(path) |> linesStructure
  val targetPath = path.replace(".DAT", "")+targetName
  val task =
    ((read |> process1.intersperse("\n") |> process1.utf8Encode) to io.fileChunkW(targetPath)).run
  task.attemptRun.leftMap(_.getMessage).map(_ => new File(targetPath))
}

val discardLine = receive1[String, String] { _ => halt }

/** discard a line if it matches the expected pattern */
def discardRegex(pattern: String): Process1[String, String] = {
  val compiled = Pattern.compile(pattern)
  receive1[String, String] { line =>
    if (compiled.matcher(line).matches) halt
    else Halt(new Exception(s"Failed to parse $line, does not match regex: $pattern"))
  }
}
```

With the code above, processing a file amounts to:

• reading the lines
• analysing them with linesStructure, which propagates the current state (the number of lines already processed) through a recursive method (go) calling itself
• writing the lines to a new file

The linesStructure method almost looks like a parser-combinators expression, with parsers sequenced by the fby ("followed by") method. That looks pretty good but... it performs horribly!
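
The two regexes drive the whole state machine, so it is worth seeing exactly what they match. The `classify` helper below is ours, added only to exercise the patterns (in Scala, a `Regex` used in a pattern must match the whole string):

```scala
object LineRegexes {
  // the same patterns used by the process above
  val HEADER  = "HEADER(.*)".r
  val TRAILER = "TRAILER\\|(\\d+)".r

  // Classify a line the way the process does: trailer first, then header, then data
  def classify(line: String): String = line match {
    case TRAILER(count) => s"trailer:$count"
    case HEADER(rest)   => s"header:$rest"
    case _              => "line"
  }
}
```

Note that `TRAILER` captures the line count as a string group, which is why the process calls `count.toInt` before comparing.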
With the good old "loop school", it took 8 seconds to process a 700M file:

```scala
def processLoop(path: String, targetName: String): String \/ File = {
  val targetPath = path.replace(".DAT", "")+targetName
  val writer = new FileWriter(targetPath)
  val source = scala.io.Source.fromFile(new File(path))
  var count = 0
  var skipNextLine = false
  try {
    source.getLines().foreach { line =>
      if (line.startsWith("HEADER")) skipNextLine = true
      else if (skipNextLine)         skipNextLine = false
      else if (line.startsWith("TRAILER")) {
        val expected = line.drop(8).headOption.map(_.asDigit).getOrElse(0)
        if (expected != count) throw new Exception(s"expected $expected, got $count")
      } else {
        count = count + 1
        writer.write(line)
      }
    }
  } catch {
    case t: Throwable => t.getMessage.left
  } finally {
    source.close
    writer.close
  }
  new File(targetPath).right
}
```

With the nice "no variables, no loops" method it took almost... 8 minutes!

### Chunky streaming

It is fortunately possible to recover correct performance by "chunking" the lines before processing them.
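
The chunking trick below relies on the standard library's `Iterator.sliding(n, n)`, which groups an iterator into non-overlapping blocks (with a shorter final block). A quick sketch of that behavior on its own, with a hypothetical helper name:

```scala
object SlidingChunks {
  // Group lines into non-overlapping chunks of `chunkSize`, the same way the
  // chunked file reader described below groups src.getLines
  def chunks(lines: Iterator[String], chunkSize: Int): List[Vector[String]] =
    lines.sliding(chunkSize, chunkSize).map(_.toVector).toList
}
```

Because step == size, no line is ever duplicated across chunks, which is precisely the property the built-in `chunk` combinator was violating (see "One other explored avenue" below in spirit, not by reference).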
To do this, we need a new combinator, very close to the io.linesR combinator in scalaz-stream:

```scala
// read a file, returning one "chunk" of lines at a time
def linesRChunk(filename: String, chunkSize: Int = 10000): Process[Task, Vector[String]] =
  io.resource(Task.delay(scala.io.Source.fromFile(filename)))(src => Task.delay(src.close)) { src =>
    lazy val lines = src.getLines.sliding(chunkSize, chunkSize) // a stateful iterator
    Task.delay {
      if (lines.hasNext) lines.next.toVector
      else throw End
    }
  }
```

Now we can process each chunk with:

```scala
def process(path: String, targetName: String, bufferSize: Int = 1): String \/ File = {
  val HEADER  = "HEADER(.*)".r
  val TRAILER = "TRAILER\\|(\\d+)".r

  def linesParser(state: LineState): Process1[Vector[String], Vector[String]] = {
    def onHeader(rest: Vector[String]) =
      (emit(rest) |> linesParser(ExpectLineOrTrailer(0))) fby
      linesParser(ExpectLineOrTrailer(rest.size))

    def onLines(ls: Vector[String], actual: Int) =
      emit(ls) fby linesParser(ExpectLineOrTrailer(actual + ls.size))

    def onTrailer(ls: Vector[String], count: Int, actual: Int) =
      if ((actual + ls.size) == count) emit(ls)
      else fail(new Exception(s"expected $count lines, got $actual"))

    receive1[Vector[String], Vector[String]] { case lines =>
      (lines, state) match {
        case (Vector(), _)                                  => halt
        case (HEADER(_) +: cols +: rest, ExpectHeader)      => onHeader(rest)
        case (_, ExpectHeader)                              => fail(new Exception("expected a header"))
        case (ls :+ TRAILER(count), ExpectLineOrTrailer(n)) => onTrailer(ls, count.toInt, n)
        case (ls, ExpectLineOrTrailer(n))                   => onLines(ls, n)
      }
    }
  }

  val targetPath = path.replace(".DAT", "")+targetName
  val read =
    linesRChunk(path, bufferSize) |>
    linesParser(ExpectHeader).map(lines => lines.mkString("\n"))
  val task =
    ((read |> process1.intersperse("\n") |> process1.utf8Encode) to io.fileChunkW(targetPath)).run
  task.attemptRun.leftMap(_.getMessage).map(_ => new File(targetPath))
}
```

The linesParser method uses receive1 to analyse:

• the current state: are we expecting a HEADER, or some lines followed by a TRAILER?
• the current chunk of lines

When we expect a HEADER and we have one, we skip the row containing the column names (see onHeader), we emit the rest of the lines to the linesParser (this is the recursive call) and we change the state to ExpectLineOrTrailer. If we get some lines with no TRAILER, we emit those lines and make a recursive call to linesParser with an incremented count to signal how many lines we've emitted so far (in the onLines method). Finally, if we get some lines and a TRAILER, we check that the expected number of lines is equal to the actual one before emitting the lines and stopping the processing (no more recursive calls in onTrailer).

For reference, here are the state objects used to track the current processing state:

```scala
sealed trait LineState
case object ExpectHeader extends LineState
case class ExpectLineOrTrailer(lineCount: Int = 0) extends LineState
```

This new way of processing lines gets us:

• a readable state machine with clear transitions, which was my first objective
• adequate performance; it takes around 10 seconds to process a 700M file, which is slightly more than the processLoop version but acceptable

### One other explored avenue

It took me a loooooooooooong time to get there. I think I hit this issue when trying to use the built-in chunk combinator. When using chunk, my parser was being fed the same lines several times: for a chunk of 10 lines, I first got the first line, then the first 2, then the first 3,... Even with a modified version of chunk the performances were still very bad. This is why I wrote my own linesRChunk.

Now that I have something working, I hope this will boost other people's development time and show that it is possible to avoid loops + variables in this case!

</status>

## December 06, 2013

### Coderspiel

#### "Bored with all those holiday parties where category theory is scarcely mentioned, if at all? Well..."
“Bored with all those holiday parties where category theory is scarcely mentioned, if at all? Well hold on to your hats, because this party is NOT CONTAINED IN THAT SET.”

- Holiday Party - ny-scala (New York, NY) - Meetup

## December 05, 2013

### Eric Torreborre

#### runState for a scalaz-stream Process

<status class="ok">

I was preparing to post this on the scalaz mailing-list but I thought that a short blog post could serve as a reference for other people as well. The following assumes that you have a good knowledge of Scalaz (at least of what's covered in my "Essence of the Iterator Pattern" post) and some familiarity with the scalaz-stream library.

### My use case

What I want to do is very common: just process a bunch of files! More precisely I want to (this is slightly simplified):

1. read some pipe-delimited files
2. validate that the files have the proper internal structure:
   • one(HEADER marker)
   • one(column names)
   • many(lines of pipe delimited values)
   • one(TRAILER marker with total number of lines since the header)
3. output only the lines which are not markers to another file

### Scalaz stream

The excellent chapter 15 of Functional Programming in Scala highlights some of the potential problems with processing files:

• you need to make sure you are closing resources properly, even in the face of exceptions
• you want to be able to easily compose small processing functions together instead of having a gigantic loop and a bunch of variables
• you want to control the amount of data that is in memory at any moment in time

Based on the ideas of the book, Paul Chiusano created scalaz-stream, a library providing lots of combinators for doing this kind of input/output streaming operation (and more!).

### A state machine for the job

My starting point for addressing our requirements is to devise a State object representing both the expected file structure and the fact that some lines need to be filtered out.
First of all I need to model the kind of lines I'm expecting when reading the file:

```scala
sealed trait LineState
case object ExpectHeader extends LineState
case object ExpectHeaderColumns extends LineState
case class ExpectLineOrTrailer(lineCount: Int = 0) extends LineState
```

As you can see, ExpectLineOrTrailer contains a counter to keep track of the number of lines seen so far.

Then I need a method (referred to as the State function below) to update this state when reading a new line:

```scala
def lineState(line: String): State[Throwable \/ LineState, Option[String]] =
  State { state: Throwable \/ LineState =>
    def t(message: String) = new Exception(message).left

    (state, line) match {
      case (\/-(ExpectHeader), HeaderLine(_))            => (ExpectHeaderColumns.right, None)
      case (\/-(ExpectHeaderColumns), _)                 => (ExpectLineOrTrailer(0).right, None)
      case (\/-(ExpectHeader), _)                        => (t("expecting a header"), None)
      case (\/-(ExpectLineOrTrailer(n)), HeaderLine(_))  => (t("expecting a line or a trailer"), None)
      case (\/-(ExpectLineOrTrailer(n)), TrailerLine(e)) =>
        if (n == e) (ExpectHeader.right, None)
        else        (t(s"wrong number of lines, expecting $e, got $n"), None)
      case (\/-(ExpectLineOrTrailer(n)), _)              => (ExpectLineOrTrailer(n + 1).right, Some(line))
      case (-\/(e), _)                                   => (state, None)
    }
  }
```

The S type parameter (in the State[S, A] type) used to keep track of the "state" is Throwable \/ LineState. I'm using the "left" part of the disjunction to represent processing errors. The error type itself is a Throwable. Originally I was using any type E, but we'll see further down why I had to use exceptions.

The value type A I extract from State[S, A] is going to be Option[String], in order to output None when I encounter a marker line.

This is all pretty good, functional and testable. But how can I use this state machine with a scalaz-stream Process?

### runState

After much head scratching and a little help from the mailing-list (thanks Pavel!) I realized that I had to write a new driver for a Process.
Something which would understand what to do with a State. Here is what I came up with:

```scala
def runState[F[_], O, S, E <: Throwable, A](p: Process[F, O])
            (f: O => State[E \/ S, Option[A]], initial: S)
            (implicit m: Monad[F], c: Catchable[F]) = {

  def go(cur: Process[F, O], init: S): F[Process[F, A]] = {
    cur match {
      case Halt(End) => m.point(Halt(End))
      case Halt(e)   => m.point(Halt(e))

      case Emit(h: Seq[O], t: Process[F, O]) => {
        println("emitting lines here!")
        val state = h.toList.traverseS(f)
        val (newState, result) = state.run(init.right)
        newState.fold(
          l => m.point(fail(l)),
          r => go(t, r).map(emitAll(result.toSeq.flatten) ++ _)
        )
      }

      case Await(req, recv, fb: Process[F, O], cl: Process[F, O]) =>
        m.bind(c.attempt(req.asInstanceOf[F[Any]])) {
          _.fold(
            { case End => go(fb, init)
              case e   => go(cl.causedBy(e), init) },
            o => go(recv.asInstanceOf[Any => Process[F, O]](o), init))
        }
    }
  }
  go(p, initial)
}
```

This deserves some comments :-)

The idea is to recursively analyse what kind of Process we're currently dealing with:

1. if this is a Halt(End) we've terminated processing with no errors. We then return an empty Seq() in the context of F (hence the m.point operation). F is the monad that provides us with input values, so we can think of all the computations happening here as happening inside F (probably a scalaz.concurrent.Task when reading file lines)

2. if this is a Halt(error) we use the Catchable instance for F to instruct the input process what to do in the case of an error (probably close the file, clean up resources,...)

3. if this is an Emit(values, rest) we traverseS the list of values in memory with our State function and we use the initial value to get:
   1. the state at the end of the traversal,
   2. all the values returned by our State at each step of its execution.

   Note that the traversal will happen on all the values in memory; there won't be any short-circuiting if the State indicates an error. Also, and this is important, the traverseS method is not trampolined.
This means that we will get StackOverflow exceptions if the "chunks" that we are processing are too big. On the other hand we will avoid trampolining on each line so we should get good performances. If there was an error we stop all processing and return the error otherwise we emit all the values collected by the State appended to a recursive call to go 4. if this is an Await Process we attempt to read input values, with c.attempt, and use the recv function to process them. We can do that "inside the F monad" by using the bind (or flatMap) method. The resulting Process is sent to go in order to be processed with the State function Note what we do in case 2. when the newState returns an exception.left. We create a Process.fail process with the exception. This is why I used a Throwable to represent errors in the State function. Now let's see how to use this new "driver". ### Let's use it First of all, we create a test file: import scalaz.stream._import Process._val lines = """|HEADER|file |header1|header2 |val11|val12 |val21|val22 |val21|val22 |TRAILER|3""".stripMargin// save 100 times the lines above in a file(fill(100)(lines).intersperse("\n").pipe(process1.utf8Encode) .to(io.fileChunkW("target/file.dat")).run.run Then we read the file but we buffer 50 lines at the time to control our memory usage: val lines = io.linesR("target/file.dat").buffer(50) We're now ready to run the state function: // this task processes the lines with our State function// the initial State is ExpectHeader because this is what we expect the first line to beval stateTask: Task[Process[Task, String]] = runState(lines)(lineState, ExpectHeader)// this one outputs the lines to a result file// separating each line with a new line and encoding it in UTF-8val outputTask: Task[Unit] = stateTask.flatMap(_.intersperse("\n").pipe(process1.utf8Encode) .to(io.fileChunkW("target/result.dat")).run)// if the processing throws an Exception it will be retrieved hereval result: Throwable \/ Unit = 
task.attemptRun When we finally run the Task, the result is either ().right if we were able to read, process, and write back to disc or exception.left if there was any error in the meantime, including when checking if the file has a valid structure. The really cool thing about all of this is that we can now precisely control the amount of memory consumed during our processing by using the buffer method. In the example above we buffer 50 lines at the time then we process them in memory using traverseS. This is why I left a println statement in the runState method. I wanted to see "with my own eyes" how buffering was working. We could probably load more lines but the trade-off will then be that the stack that is consumed by traverseS will grow and that we might face StackOverflow exceptions. I haven't done yet any benchmark but I can imagine lots of different ways to optimise the whole thing for our use case. ### try { blog } finally { closing remarks } I'm only scratching the surface of the scalaz-stream library and there is still a big possibility that I completely misunderstood something obvious! First, it is important to say that you might not need to implement the runState method if you don't have complex validation requirements. There are 2 methods, chunkBy and chunkBy2, which allow to create "chunks" of lines based on a given line (for chunk) or pair of lines (for chunk2) naturally serving as "block" delimiters in the read file (for example a pair of "HEADER" followed by a "TRAILER" in my file). Second, it is not yet obvious to me if I should use ++ or fby when I'm emitting state-processed lines + "the rest" (in step 2 when doing: emitAll(result.toSeq.flatten) ++ _). The difference has to do with error/termination management (the fallback process of Await) and I'm still unclear on how/when to use this. Finally I would say that the scalaz-stream library is intriguing in terms of types. 
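For comparison, the buffered, chunk-at-a-time state machine that runState implements can be sketched without scalaz-stream at all, using plain Scala collections and Either. The states and the step function below are hypothetical stand-ins for the post's ExpectHeader-style State function, and (unlike traverseS) this fold stops consuming a chunk at the first error:

```scala
// Plain-Scala sketch: thread a validation state through chunks of lines,
// producing transformed output or failing with the first error.
// `S`, `step`, and the toUpperCase "processing" are illustrative only.
sealed trait S
case object ExpectHeader extends S
case object InBody extends S

def step(state: S, line: String): Either[Throwable, (S, Option[String])] =
  (state, line) match {
    case (ExpectHeader, l) if l.startsWith("HEADER") => Right((InBody, None))
    case (ExpectHeader, l) => Left(new Exception(s"expected a header, got: $l"))
    case (InBody, l)       => Right((InBody, Some(l.toUpperCase)))
  }

// `chunkSize` plays the role of buffer(50): each chunk is folded in memory
def runChunked(lines: Iterator[String], chunkSize: Int): Either[Throwable, Vector[String]] =
  lines.grouped(chunkSize)
    .foldLeft[Either[Throwable, (S, Vector[String])]](Right((ExpectHeader, Vector.empty))) {
      case (Left(e), _) => Left(e) // a previous chunk failed: skip the rest
      case (Right((s0, out0)), chunk) =>
        chunk.foldLeft[Either[Throwable, (S, Vector[String])]](Right((s0, out0))) {
          case (Left(e), _) => Left(e)
          case (Right((s, out)), line) =>
            step(s, line).map { case (s2, o) => (s2, out ++ o) }
        }
    }
    .map(_._2)
```

With this sketch, runChunked(Iterator("HEADER|file", "val11|val12"), 50) produces Right(Vector("VAL11|VAL12")), while an input that does not begin with a header line produces a Left.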
A process is Process[F[_], O], where O is the type of the output, and the type of the input is... nowhere? Actually it is in the Await[F[_], A, O] constructor, as a forall type. That's not all. In Await you have the type of the request, F[A], and a function to process elements of type A, recv: A => Process[F, O], but no way to extract or map the value A from the request to pass it to the recv method! The only way to do that is to provide an additional constraint to the "driver method" by saying, for example, that there is an implicit Monad[F] somewhere. This is the first time I have seen a design where we build structures and then give them properties when we want to use them. Very unusual.

I hope this can help other people exploring the library and, who knows, some of this might end up being part of it. Let's see what Paul and others think...

## November 28, 2013

### scala-lang.org

#### Announcing Scala.js v0.1

We’re excited to announce the first release of Scala.js, v0.1!

Scala.js was introduced during the 4th Scala Days in June 2013, and has now reached relative stability. While we don’t yet feel that Scala.js is production-ready, we think that it nonetheless deserves its first non-snapshot release.

Scala.js is a compiler from Scala to JavaScript. It allows you to write your entire web application in Scala and simply compile to JavaScript!

## Get started!

All information on how to get started with Scala.js is available on the Website. Documentation, a mailing list, third-party libraries and tools are all available.

## Noteworthy features

• Support all of Scala (including macros!), modulo a few semantic differences
• Very good interoperability with JavaScript code. For example, use jQuery and HTML5 from your Scala.js code, either in a typed or untyped way. Or create Scala.js objects and call their methods from JavaScript.
• Integrated with sbt (including support for dependency management and incremental compilation)
• Can be used with your favorite IDE for Scala
• Generates Source Maps for a smooth debugging experience (step through your Scala code from within your browser supporting source maps)
• Integrates Google Closure Compiler for producing minimal code for production

## Known issues

You may consult (and report) issues on GitHub.

## November 26, 2013

### scala-lang.org

#### Scala 2.11.0-M7 is now available!

The seventh development milestone of Scala 2.11 is now available for download! It brings the following goodness:

• delambdafication (compiling closures Java 8-style, as close as you can get on Java 6) by @jamesiry
• blackbox/whitebox macro distinction by @xeno-by
• collection deprecation and mutable LongMap/AnyRefMap by @ichoran
• several IDE improvements by @dotta (positions for default args, docs on how to hack the compiler in the IDE) and @skyluc (completion for imports)
• for loop support in quasiquotes by @densh
• experimental Single Abstract Method support

Full details can be found on GitHub. We’re working on an overview of the Scala 2.11 release; PRs welcome!

## Known issues

The Scala compiler artifact (due to scaladoc) depends on the previous version (2.11.0-M6) of the scala-xml and scala-parser-combinators modules. If you depend on scala-compiler (e.g., because you’re developing a macro), you should take care to exclude these _2.11.0-M6 dependencies, and provide the _2.11.0-M7 ones instead. This will be fixed in M8, which will be released before the end of the year.
```scala
def excludeM6Modules(m: ModuleID) =
  (m exclude("org.scala-lang.modules", "scala-parser-combinators_2.11.0-M6")
     exclude("org.scala-lang.modules", "scala-xml_2.11.0-M6"))

// include these settings in your project:
libraryDependencies += excludeM6Modules("org.scala-lang" % "scala-compiler" % scalaVersion.value)
libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.0-RC7"
libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "1.0.0-RC5"
```

## Regressions

We’d love to hear about any regressions since 2.10.3 or 2.11.0-M6. Before reporting one, please search for existing bugs and/or consult with the scala-user mailing list to be sure it is a genuine problem. When reporting a bug, please set the ‘Affects Version’ field to 2.11.0-M7 and add the regression label where appropriate.

## Scala IDE Lithium (4.0) for Eclipse

Please point your Eclipse 4.2/4.3 at http://download.scala-ide.org/sdk/e38/scala211/dev/site/ to update to the latest version that includes this milestone! For more info, please see the getting started guide.

## Binary compatibility

Note that this release is not binary compatible with the 2.10.x series, so you will need to obtain a fresh build of your dependencies against this version.

## November 24, 2013

### Coderspiel

#### Introducing Java type provider (for F#)

## November 21, 2013

### Tim Perrett

#### Free Monads, Part One

This post is the first of a series; the next article will cover constructing an application using the CoProduct of Free monads (essentially putting the lego blocks together).

When I was growing up, my grandmother used to tell me "There's nothing new under the sun […]".
As a child, this seemed like an odd thing to say: clearly new things were invented all the time, so this couldn't be right, could it?… Of course, as we get older, one realises that indeed many "new" things are just the same thing repackaged, polished or slightly altered; they are fundamentally the same thing, and my grandmother was indeed correct: rare is the occasion there's anything truly new to the world.

This experience is one I'm sure is shared by many people as they grow up, and I'd like to draw an interesting parallel here with Functional Programming (FP): over the past years I have repeatedly had these eureka moments, realising that something I was solving had already been solved many moons before by someone else - one simply had not made the connection between the abstraction and the problem. Today was one of those days.

Like many engineers, I am gainfully employed to build large systems that feature an abundant array of non-trivial business logic, and which subsequently have many moving parts to deliver the end solution. The complexity of these moving parts has always bothered me, and over time I have sought out a range of different abstractions to try to alleviate the building of such applications. However, all these solutions pretty much suck, or have some aspect of jankiness, and testing can frequently be a problem: despite best efforts, things can often become awkwardly coupled as a codebase evolves and requirements shift under you.

With this frame, recently I have been investigating Free monads, and my, my, what a delightfully powerful generic abstraction these things are! In this post I will be covering how to implement the much-loved task of logging in terms of scalaz.Free.

### Domain Algebra

Before we dive into any specifics about Free, we should first consider the operations necessary for the domain you want to implement, a.k.a. the domain algebra.
In the case of logging, the domain is of course very small, but it should be familiar to many folks:

```scala
// needs to be covariant because of scalaz.Free 7.0.4;
// in the 7.1 series it's no longer covariant - thanks Lars!
sealed trait LogF[+A]

object Logging {
  case class Debug[A](msg: String, o: A) extends LogF[A]
  case class Info[A](msg: String, o: A) extends LogF[A]
  case class Warn[A](msg: String, o: A) extends LogF[A]
  case class Error[A](msg: String, o: A) extends LogF[A]
}
```

As you can see, our "domain" simply involves the different levels of log messages, DEBUG through ERROR. The purpose here is to model every single operation in that domain as an ADT. This is essentially the command concept in CQRS, which is just another name for algebra (I use this analogy as perhaps more people are familiar with CQRS). Let's look at the details a little more closely:

```scala
sealed trait LogF[+A]
```

The LogF trait in this example really does nothing at all; it just serves as the "top level" marker, which we shortly provide a Functor for (hence being called LogF).

```scala
case class Debug[A](msg: String, o: A) extends LogF[A]
```

The algebra itself needs to extend LogF and take all the arguments required to execute that domain operation (in this case, a single String to print to the output, but you can imagine having a higher number of parameters to actually do something more useful).
As for the o: A, this is a vehicle to make the Free abstraction work - in essence, it is the "next computation step", and we can wire that in by virtue of LogF having a Functor, like so:

```scala
implicit def logFFunctor[B]: Functor[LogF] = new Functor[LogF] {
  def map[A, B](fa: LogF[A])(f: A => B): LogF[B] =
    fa match {
      case Debug(msg, a) => Debug(msg, f(a))
      case Info(msg, a)  => Info(msg, f(a))
      case Warn(msg, a)  => Warn(msg, f(a))
      case Error(msg, a) => Error(msg, f(a))
    }
}
```

As you can see, all this Functor instance does is take the incoming ADT and apply the function f to the A argument, which allows us to thread the computation through the ADT in a very general fashion.

So this is our domain algebra - right now it is nothing more than a definition of possible operations; it is totally inert, so we need some way to interpret the possible operations and actually do something about them. This brings us neatly onto interpreters.

### Interpreters

In the domain of logging, the content to be logged is totally disjoint from what is done with that content. For example, perhaps we want to use SLF4J in production, but println whilst we're developing, or perhaps we just want the flexibility to decide later how we should actually do the logging. When designing your system in terms of domain algebra and Free, this becomes trivial, as you simply need to provide a different interpreter implementation that uses whatever implementation you fancy. Let's look at an implementation that uses println:

```scala
object Println {
  import Logging._
  import scalaz.{~>, Id}, Id.Id

  private def write(prefix: String, msg: String): Unit =
    println(s"[$prefix] $msg")

  private def debug(msg: String): Unit = write("DEBUG", msg)
  private def info(msg: String): Unit  = write("INFO", msg)
  private def warn(msg: String): Unit  = write("WARN", msg)
  private def error(msg: String): Unit = write("ERROR", msg)

  private val exe: LogF ~> Id = new (LogF ~> Id) {
    def apply[B](l: LogF[B]): B = l match {
      case Debug(msg, a) => { debug(msg); a }
      case Info(msg, a)  => { info(msg); a }
      case Warn(msg, a)  => { warn(msg); a }
      case Error(msg, a) => { error(msg); a }
    }
  }

  def apply[A](log: Log[A]): A =
    log.runM(exe.apply[Log[A]])
}
```
For the most part, this should be really straightforward to read, as all it's doing is providing a small piece of code that actually does the work of printing to the console. The part that is of interest is the def apply[A](log: Log[A]): A method, as this is where the awesome is taking place. Notice that the argument is of type Log[A]. Until now, we have not defined this, so let's add a definition and explain it:

type Log[A] = Free[LogF, A]


So Log is just a type alias for a Free monad on the LogF functor we defined earlier. This sounds a lot worse than it is; in essence it just means that Log[A] is any constructor of Free, of which there are two options:

• Suspend - the intuition here is "stop the computation and hand control to the caller".
• Return - and similarly, "I'm done with my computation, here's the resulting value"

So, with this in mind, assuming there is a Log[A] passed in, Free defines the method runM which will recursively execute the free until reaching the Return (essentially flatMap that shit all the way down, so to speak). In order for this to happen, the caller needs to supply a function S[Free[S, A]] => M[Free[S, A]], or more specifically in terms of this example: LogF[Free[LogF, A]] => Id[Free[LogF, A]], and this is exactly the purpose of the exe value - it takes the domain algebra and executes the appropriate function in the interpreter and "threads" the A through the computation, simply by returning it in this case (as the logging is a side-effect).

Now you have the algebra for the domain, and a way to interpret that, let's add some syntax sugar so that this stuff is conveniently usable in an application.

### MOAR SUGUAARRR

It would be nice if the API would look something like:

object Main {
import Logging.log

val program: Free[LogF, Unit] =
for {
a <- log.info("fooo")
b <- log.error("OH NOES")
} yield b

def main(args: Array[String]): Unit = {
Println(program)
}
}


Well, it turns out that we can conveniently achieve this by lifting the LogF instance into Free, by virtue of LogF being a Functor… sweet!

implicit def logFToFree[A](logf: LogF[A]): Free[LogF,A] =
Suspend[LogF, A](Functor[LogF].map(logf)(a => Return[LogF, A](a)))


Then we can simply define some convenient usage methods and make the A that we are threading through a Unit, as the act of printing to the console has no usable result.

object log {
def debug(msg: String): Free[LogF, Unit] = Debug(msg, ())
def info(msg: String): Free[LogF, Unit]  = Info(msg, ())
def warn(msg: String): Free[LogF, Unit]  = Warn(msg, ())
def error(msg: String): Free[LogF, Unit] = Error(msg, ())
}


Critically, using Unit here is simply a product of having no usable value - if we wanted to make a "logger" that was entirely pure and only dumped its output to the console at the end of the application, we could simply write an interpreter that accumulated the content to log in a List[String]!
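That last idea - a pure interpreter that accumulates its output instead of printing - is easy to demonstrate with a minimal, hand-rolled sketch of Free in plain Scala. Everything below (the Log/Return/Suspend types, mapLogF, runPure) is a simplified, dependency-free stand-in for the scalaz machinery used in this post, not the library's actual API:

```scala
// A cut-down LogF algebra (just two levels, to keep the sketch short)
sealed trait LogF[+A]
case class Info[A](msg: String, next: A) extends LogF[A]
case class Error[A](msg: String, next: A) extends LogF[A]

// A minimal Free monad over LogF: either a final value, or a suspended step
sealed trait Log[A] {
  def flatMap[B](f: A => Log[B]): Log[B] = this match {
    case Return(a)   => f(a)
    case Suspend(fa) => Suspend(mapLogF(fa)(_.flatMap(f)))
  }
  def map[B](f: A => B): Log[B] = flatMap(a => Return(f(a)))
}
case class Return[A](a: A) extends Log[A]
case class Suspend[A](resume: LogF[Log[A]]) extends Log[A]

// the Functor from the post, written as a plain function
def mapLogF[A, B](fa: LogF[A])(f: A => B): LogF[B] = fa match {
  case Info(m, a)  => Info(m, f(a))
  case Error(m, a) => Error(m, f(a))
}

// smart constructors, mirroring the post's `log` object
def info(msg: String): Log[Unit]  = Suspend(Info(msg, Return(())))
def error(msg: String): Log[Unit] = Suspend(Error(msg, Return(())))

// a pure interpreter: no printing, just an accumulated transcript
def runPure[A](log: Log[A], acc: List[String] = Nil): (List[String], A) =
  log match {
    case Return(a)               => (acc.reverse, a)
    case Suspend(Info(m, next))  => runPure(next, s"[INFO] $m" :: acc)
    case Suspend(Error(m, next)) => runPure(next, s"[ERROR] $m" :: acc)
  }

val program: Log[Unit] =
  for {
    _ <- info("fooo")
    b <- error("OH NOES")
  } yield b

val (transcript, _) = runPure(program)
// transcript == List("[INFO] fooo", "[ERROR] OH NOES")
```

Running the program performs no I/O at all; the transcript comes back as an ordinary List[String], which is exactly what makes interpreters like this pleasant to test.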

With the sugar defined, an algebra and an interpreter, all that's left is to execute the main :-)

You can find all the code for this post over on Github.

### Functional Jobs

#### Software Developer - Functional Programming at Genetec (Full-time)

Software developers at Genetec use their technical aptitudes creatively in order to design and program new features, while working closely with the product management teams to meet customers’ expectations. They work in multidisciplinary teams driven by the desire to overcome the limits of the technology in order to deliver products of outstanding quality, beauty and creativity to the customers.

A software development career at Genetec is much more than just an opportunity to create great products; it is also an opportunity to work in a world class, talented, high energy software development team with a solid track record of creating winning products.

Roles and Responsibilities:

The current position is a code intensive position specialized in distributed applications development using functional programming and .Net technologies.

• Design and implement large scale distributed network centric applications using .NET 4.0 technologies in F#.

• Elaborate functional and architectural specifications for different features.

• Manage their time to respect milestones and delivery dates.

• Work in conjunction with software testers to fix different bugs in the product.

Requirements:

• Bachelor or Master’s degree in Computer Engineering, Software Engineering, Computer Science, Mathematics or Physics.

• Minimum of 1 year of experience in F# development or other functional languages such as Erlang, Haskell, OCaml, Scala or Scheme.

• Must be fluent in French and English.

Technical Requirements:

• Strong knowledge in functional programming

• Strong knowledge in object-oriented programming.

• Strong knowledge of multi-thread application development.

• Experience with Microsoft Visual Studio .NET 2008 or 2010.

Assets:

• Experience with the following:

o TCP/IP and protocol development

o Microsoft SQL Server programming

o Transactional and n-tier network Architectures

Get information on how to apply for this position.

### Coderspiel

#### "A diverse study group to explore Haskell, the most powerful programming language yet. Cause..."

“A diverse study group to explore Haskell, the most powerful programming language yet. Cause programming is hard and cats are busy but if they can do it so can you.”

- Haskell_For_Cats (New York, NY) - Meetup

## November 20, 2013

### Coderspiel

#### Here’s those slides. makingmeetup: On Monday we hosted a...


Here’s those slides.

On Monday we hosted a tech talk on what’s coming, and why, in Meetup 2 for Android.

Leave a comment here if you have any questions. And if you’re interested in helping us bring people together on Android and other platforms, we’re hiring.

## November 19, 2013

### Coderspiel

#### Design matters, on Android

Tonight we hosted an Android meetup at Meetup. I took this fairly bad photo of Jimena, Mike, and John, from the back of the room on a Nexus 5. For the conditions, I think the 5 did pretty well.

They talked about the progress we’re making in rebuilding the mobile apps as actual social networking apps. Our apps were originally designed as calendars only; we bolted on social and group features one by one, muddling navigation on Android in the process. This project has been about producing a coherent navigation scheme that can better expose current features and make some room for personalized content to come.

If slides are posted I’ll link to them, but you’ll also be able to install the new app yourself in the next few weeks.

### Paul Chiusano

#### The Apple TV remote has a terrible UX

The Apple TV remote looks pretty but has an irritating UX:

• The up/down/left/right control is a single continuous circular ring. Thus, there is no tactile feedback as to when you've moved from e.g. the 'up' region to the 'right' region. If I try to move too quickly between directions, I often end up pressing the wrong button.
• Compounding this problem, there is a button which occupies the entire center of the up/down/left/right ring whose function is 'select'. There is zero bevel separating the outer ring from this center button, so again, no tactile feedback whatsoever, and I routinely (accidentally) select something when I'm really trying to move up/down/left/right. This is irritating in a First World Problems sort of way, especially since there's a noticeable delay in opening the (accidentally selected) next screen, and in navigating back. (And for some reason, the back button is labeled 'menu', but let's ignore that for now.)

To compensate, one has to either be overly deliberate or look down at the remote, which makes using the device more of an awkward, tiptoeing experience, rather than a natural extension of my hand. And if you've ever used the remote, now that I'm pointing it out, you're probably realizing you've had the same problem.

Unlike a trackpad, a remote like this is fundamentally a discrete input device, like a keyboard, which has bevels on the home keys for good reason. Making the remote feel smooth and continuous is inappropriate and hurts usability.

We can debate how best to fix this particular product. But I have a more fundamental question--how did this get into production? I made the above observations after about 2 minutes of using the device. I am quite sure the geniuses at Apple could have designed an equally pretty remote without these usability problems. (Personally, I'd remove the center button and add a bevel or channel separating the up/down/left/right regions.) Form has to follow function at some point. Was this just a case of brainlessly following some design rules ("prefer smooth to bevels") without actually thinking about the consequences for the user?

In many ways, good design is not hard. All you have to do is pay a modicum of attention.