
praeclarum

Jul 20, 2018

Code Prediction with a Neural Network

TLDR; I used Python to create a neural network, exposed through an F# function, that predicts C# code. The network was compiled to a CoreML model and runs on iOS to be used in my app Continuous to provide keyboard suggestions.

The Problem

My .NET IDE Continuous runs on iOS instead of the normal desktop environments programmers are used to. This isn’t so much a problem since mobile devices have plenty of horsepower these days, but it does present a few human interface challenges - the biggest of these is entering code.

My desktop computer features a 108 key keyboard while the iPad’s on-screen keyboard features 36 keys - quite a difference! You can also get hardware keyboards for the iPad that feature 64 keys - still quite a ways from 108.

To ease the problem of code entry, Continuous has always shipped with a “keyboard accessory” that gives access to lots of missing characters used while programming.

I made this list of keys by scanning a bunch of code and seeing the most popular characters used. I sorted that list by popularity and added some order to it so it wouldn’t seem too random to users.

This was a good start but I was never happy with it - my biggest complaint was that it had no awareness of context and always showed the same list. This meant you usually had to scroll it a bit to find what you wanted. It also didn’t help you with keywords and multi-character tokens such as =>.

I wanted a new suggestion engine that tried to guess what you intend to type next in order to ease coding.

A Neural Solution

I’ve spent the last year deep diving into neural networks and machine learning in general. I find it to be a refreshing alternative to the rigid confines of programming while also being a whole new and unexplored solution space to wander around in.

I’m not the only one enchanted by this new tech - Apple has pushed ML forward with its CoreML library that ties into its hardware, promising efficient execution. It’s liberating to know that my mobile devices are now powerful enough to run very sophisticated networks. This also means that I have an easy path to create networks and write apps that use them - it’s an exciting time!

The decision to use a neural network to solve my code prediction problem was an easy one given all of this. Now I just need to choose what kind of network to use.

Sequence prediction is a classic problem in neural networks these days. The idea is that if you learn patterns in a sequence, then you can start predicting that sequence (extrapolating). Sequences can be letters of a natural language, samples of audio, stock values (just kidding, don’t go down that dark path), or, hmm, bits of code.

The current favorite network architecture for sequence prediction is the Recurrent Neural Network (RNN). These puppies are special because they have an intrinsic understanding of a sequence of events, made available by an internal memory. Most neural networks are strictly “feed-forward” - data goes in one end and comes out the other. RNNs, however, have internal feedback - data comes in one end and can get captured by the memory of the RNN. This memorized data can then be used by the next prediction. It’s quite sophisticated and, with enough horsepower, can do amazing things.

So I have a well defined problem, and I even have a solution. Now I just need to code all that up.

Training Grounds

Neural network libraries are a dime a dozen these days. The trouble is that they all use different languages, slightly different vocabulary, different file formats, run on specialized hardware, and only feature the barest of documentation. It’s exciting! To add to the mix, every cloud vendor seems eager to sell off time on their expensive GPUs. Using these vendors requires learning their own proprietary ways of executing NN libraries and a fun mix of APIs. They are usually cheaper than going out and buying hardware, but it’s exhausting...

To that end, I took a very conservative approach to building my network. I decided to use:

Python as the programming language

Keras as my high-level NN library

PlaidML as the execution engine for training

Training on local hardware (iMac Pro with an AMD Radeon Pro Vega 56)

I’ve used Python on and off throughout my career and am comfortable with it. But everyone who knows me is probably asking why I didn’t use C# or F# - my preferred application development languages. Quite simply, there are no .NET libraries that can take advantage of Mac hardware. Most of the .NET libraries are either CPU bound (nope, not even bothering) or only run accelerated on Windows (inconvenient for me).

The other reason to use Python is that it is what the rest of the NN community uses. This stuff is hard and I’m constantly googling. Translating from Python to C# is exhausting and, from an engineering perspective, pointless. That said, I do hope .NET NN libraries mature and they will be a viable option in the future.

Keras is a nice high-level API for constructing and training networks. It’s “high-level” because it abstracts the execution engine from the model definition and because its API is very readable. Also, the author of the library, François Chollet, has written a book called Deep Learning with Python that I absolutely adore. He gives clear explanations and lots of examples of lots of different types of networks. Normally Keras uses TensorFlow as its engine, but I’m not using that. Instead, I use OpenCL through PlaidML.

And now a shoutout for PlaidML - this sh*t is hot. Most NN libraries specialize for NVIDIA hardware (through TensorFlow). This monopoly is gross and quite burdensome to Mac users and anyone else not using NVIDIA devices. Not long ago, there were no hardware accelerated NN libraries for Macs...

Today, Apple is doing well on the model execution front, but has only made small inroads on model training.

Enter PlaidML. The library enables you to code standard Keras networks and train using a variety of hardware. It enables you to accelerate training and prediction on Macs, and I’m in love with it. If you’re a Mac or Windows user and don’t conform to the NVIDIA hegemony, then I suggest you give it a look.
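For reference, pointing Keras at PlaidML takes only a couple of lines. Here is a minimal sketch, assuming the plaidml-keras package is installed (pip install plaidml-keras) and plaidml-setup has been run once to pick a device:

# Swap in the PlaidML backend before Keras itself is imported
import plaidml.keras
plaidml.keras.install_backend()

import keras
print(keras.backend.backend())  # should report the PlaidML backend, not TensorFlow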

Training Data

So just how do you predict what code comes next? It’s not a trivial problem and I think that there are a lot of ways you could present this problem to a NN.

When I first worked on this problem, I did everything at the character level. I just fed the network code file after code file and told it: learn C#. Here is some of the silly code it generated:

https://twitter.com/praeclarum/status/985575617310539776

It’s fun to see it hallucinating variable names and data structures. But there were a couple problems with this network:

It needed a pretty big history to do its job well - the version above used 80 preceding characters to make its prediction. When running on device, you have to execute the network for each history point and therefore execution speed is proportional to the amount of history. While CoreML is fast, this is asking a bit much in 2018.

The network is quite big. Because it has to learn (1) the syntax of C#, (2) style rules, (3) variable and type naming rules, (4) even some semantics, the network had to grow and grow to do its job.

It would generate doc comments that were nonsensical but still funny.

It would get lost generating string literals. This is easy to work around but was a hilarious flaw. Learning all the things that we put in our strings really taxed this network. This was also a problem for numbers and any other literals.

It was heavily biased towards my style of coding :-) It would use my kind of variable names, my whitespace formatting (honestly it could never choose between tabs and spaces either), and my libraries.

I decided that I was asking a bit too much of the net and that I would greatly simplify its task. This would result in a smaller model that I could train more confidently and that would be slightly less biased.

I decided to switch to “token types” as the items in a training sequence. I used Roslyn’s CSharpSyntaxTree to parse thousands of files and create sequences of token types.

Such a stream looks something like this:

UsingKeyword IdentifierToken SemicolonToken NamespaceKeyword IdentifierToken OpenBraceToken PublicKeyword ClassKeyword IdentifierToken OpenBraceToken PublicKeyword VoidKeyword IdentifierToken OpenParenToken CloseParenToken OpenBraceToken CloseBraceToken CloseBraceToken CloseBraceToken ...

This corresponds with the code:

using X; namespace X { public class X { public void X() {} } }

As you can see, it loses the concept of whitespace, loses all knowledge of variable names (identifiers), and doesn’t learn literals - it’s quite dumb, in fact. It is still, however, learning the syntax of C# and common patterns that developers use.

While dumb, this is exactly the data that my keyboard wants and I decided that this is the training data that would be used by the network.

In all, I generated a sequence of 1,000,000 tokens for training and validation. There are about 150 different types of tokens that it will have to learn.
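Concretely, turning that flat token stream into training samples is just a windowing exercise: every run of preceding tokens gets paired with the token that follows it. Here’s a sketch of the idea (illustrative, not my exact script; the window length of 8 is the one discussed below):

import numpy as np

def make_training_windows(token_ids, history=8):
    # Slice a flat stream of token-type IDs into (history, next-token) pairs.
    X, y = [], []
    for i in range(len(token_ids) - history):
        X.append(token_ids[i:i + history])  # the preceding tokens
        y.append(token_ids[i + history])    # the token to predict
    return np.array(X), np.array(y)

# Stand-in data; in reality token_ids comes from the Roslyn parse above.
token_ids = np.random.randint(0, 150, size=1_000_000)
X, y = make_training_windows(token_ids)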

Do I Even Need a Network?

You might stop and wonder (as I did): if I simplified the training set so much, do I even need a neural network? Wouldn’t a lookup table be enough?

The final network I built uses a history of 8 tokens to decide what the next token will be. How large of a lookup table is this? There are 150 token types and the lookup is 8 of these, meaning there are 150^8 permutations. That’s a lookup table of 256,289,062,500,000,000 entries. I guess a naive lookup table is out of the question...
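A quick sanity check of that arithmetic:

token_types = 150
history = 8
print(token_types ** history)  # 256289062500000000 - no thanks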

Are there other techniques? Sure, XGBoost (gradient-boosted decision trees) is a popular alternative to neural networks. It does very well in competitions and provides a completely different look at the data.

Unfortunately, my ML knowledge is specialized at this point to nets, so I’m just going to stick with what I know.

Building and Training

I constructed the most basic RNN I could: a single layer of LSTM nodes (special nodes with the memory mentioned above) to learn patterns in the data and a final fully connected layer (Dense) to do the actual prediction of the next token.

Even with such a simple architecture, there are a lot of knobs to turn in order to train the net well. These are referred to as “hyperparameters”. For this model, they include:

The amount of history to provide the network per prediction

The number of nodes to use in the recurrent layer

The types of nodes to use in the recurrent layer (LSTM, GRU, Conv1D)

The activation functions of the layers

How much data is provided per training session (epoch)

How many epochs to train for

If I were a good engineer, I would have written a script that varied these hyperparameters, automatically trained nets, and eventually reported the best combination. Instead, I tweaked them by hand until I found a good combination (or, rather, until I was exhausted).

While training, you’re also balancing the size (and therefore speed) of the network vs its capabilities or accuracy. As this is my first network that I plan on shipping in an app, I wanted to stay conservative and tried to stay below 200 KB total model size.

In the end I trained a network with 67% accuracy that requires 8 history samples to make its prediction (using 16 samples only got it up to 69%). Here’s its summary:
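Roughly, that summary corresponds to a Keras model like the following sketch - one LSTM layer feeding a Dense softmax over the ~150 token types, with one-hot inputs. The layer size and training settings here are my guesses, not the shipped values:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.utils import to_categorical

NUM_TOKEN_TYPES = 150
HISTORY = 8

model = Sequential([
    LSTM(64, input_shape=(HISTORY, NUM_TOKEN_TYPES)),  # the recurrent memory
    Dense(NUM_TOKEN_TYPES, activation="softmax"),      # probability for each token type
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Stand-in data; in reality X and y are the windows cut from the token stream.
X = np.random.randint(0, NUM_TOKEN_TYPES, size=(1000, HISTORY))
y = np.random.randint(0, NUM_TOKEN_TYPES, size=1000)
model.fit(to_categorical(X, NUM_TOKEN_TYPES), to_categorical(y, NUM_TOKEN_TYPES),
          epochs=20, validation_split=0.1)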

What does 67% accuracy mean? Is it good?

The answer is, I don’t really know. :-) All I can say for sure was that it’s about as good as I could get this network (and those similar to it) to perform.

Let’s take a guess at what it means by considering some other accuracies. If we randomly guess the next token then we have a 1/150 chance that we’re right, or 0.7% accuracy. OK, so we’re better than random.

What would 100% accuracy mean? Well, I kinda think it would mean we’re out of jobs. If it correctly guesses every next token then that’s darn close to solving the original programming problem and being a coder itself. Thankfully, it would have to understand so much that I don’t think it’s achievable today and our jobs are safe.

One other thing to consider is that this accuracy represents the accuracy of the net’s best guess. But, the net calculates a probability for each token - that is, it makes a second best guess, a third, and so on. My app is going to display these guesses along with the best guess, and, chances are, the one you want will be in that list.

So is 67% good? Eh, it’s not bad! But I think the only real way to know is to interact with it and use our own judgements.

Integration into an App

To use the trained model on iOS, it first needs to be converted to a CoreML model file. Apple makes this easy by providing a Python package called coremltools to convert Keras models to its own format. This produces a file called, in my case, CSharpPredictor.mlmodel.
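The conversion itself is only a few lines of Python. A sketch using coremltools’ Keras converter (the file names and input/output names here are illustrative, not the app’s actual ones):

import coremltools

# Convert a saved Keras model to a CoreML model file.
coreml_model = coremltools.converters.keras.convert(
    "predictor.h5",
    input_names=["tokenHistory"],
    output_names=["tokenProbabilities"],
)
coreml_model.save("CSharpPredictor.mlmodel")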

My IDE is written in F# using Xamarin. Xamarin has excellent support for importing models into C# projects, but things are a bit rougher in F# and a bit of extra code needs to be written. That extra code involves loading the model file and then compiling it to prepare it for use.

I created a function with a simple interface to act as the entry point, or bridge, to the neural network:

predictNextToken: SyntaxKind[] -> (SyntaxKind * float)[]

This means that it’s a function that takes an array of SyntaxKinds (what I keep calling tokens) and produces an array of guesses and their probabilities. (The actual code also returns other data needed by the IDE.)

https://gist.github.com/praeclarum/7b5029656962864936d7667ae2f4a624

The code is a little long winded because it needs to manage the memory of the RNN and because I need to play games with NSDictionary due to the missing binding (in C#, this code would be cleaner).

This code also filters out any guesses with a probability less than 0.0001 just to throw away the predictions that are very unlikely. The list actually gets decreased further in the UI code.

An important but easy-to-miss part of the above code is where it converts the token to a vector that’s usable by the model (the function CSharpPredictor.kindToVector). Neural networks don’t understand categorical data natively, so we convert each one to a “one-hot vector” - a simple binary encoding that these things love. In order to keep the network in sync with my code data types, I generate a code file that contains this mapping from SyntaxKinds to MLMultiArray, along with mappings to the literal text to insert.
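In NumPy terms the idea is just this (the real kindToVector is generated C# that fills an MLMultiArray; these names are illustrative):

import numpy as np

NUM_TOKEN_TYPES = 150

def kind_to_vector(kind_index):
    # One-hot: all zeros except a single 1 at the token type's index.
    v = np.zeros(NUM_TOKEN_TYPES, dtype=np.float32)
    v[kind_index] = 1.0
    return v

kind_to_vector(2)  # array([0., 0., 1., 0., ..., 0.], dtype=float32)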

Every neural network you build is going to need a wrapper function like this - something that bridges the gap from the bytes-and-datatypes world of traditional programming to the dataflow, connectionist world of neural networks. The complexity of the function depends on how closely the inputs and outputs of the model match the inputs and outputs of the program. In this case, it was a pretty easy function to write.

Putting it all Together

I now have a trained network, and I have code to execute it. The last step is to wire it into the UI so I can finally interact with it.

This was pretty easy given how the IDE works. Whenever the user edits text, the document gets parsed by Roslyn. The app then takes the current cursor position and scans the syntax tree backwards collecting previous tokens. Those tokens get passed to the predictNextToken function above to produce the ranked predictions. (This all happens on background threads to keep the UI snappy.)

Those predictions are passed to the keyboard accessory which is just a UICollectionView. And that’s that; as you move the cursor around, the predictions appear above the keyboard.

Keep an eye on the black bar above the keyboard and note how it changes based on the cursor position. It’s not perfect, but its top couple matches are usually right.

Future Improvements

For now, I kept the code completion window separate from this predictor but I can imagine fusing them at some point. In early versions I did present the predictions in the floating window, but that got obnoxious.

There’s certainly room for the network to grow. I am considering changing the inputs yet again to see if I can get decent variable name prediction. Right now, when the network detects with 99% certainty that you need to enter a name, it ought to go on and make some guesses at that name. I would also like to add back in some whitespace support - for example, knowing when to start a new line - but then it’s making stylistic decisions.

I am also playing with giving the network more context to work with. I can use Roslyn’s syntax tree to provide the network with a lot of direct context that it won’t need to learn on its own. This should simplify the network and make it more accurate.

In all, I think that there are a lot of ways to improve this and make little neural assistant programmers a possibility. But for now, I just love seeing it say “you really want to put an if statement here”.

praeclarum

May 31, 2018

Introducing fuget.org

Have you ever wondered what exactly is in a nuget to see if it’s right for you? You read the description, you like the name, but, if you’re like me, you probably ended up on GitHub reading the source code to decide if you want to use the library.

I love nuget and the wonderful diversity of packages in it, but I didn’t love browsing it - it was just too much work to see what a library was really offering. I decided that I wanted (and needed) a better package browser streamlined for this exploration process.

Today, I’m pleased to announce fuget.org - a new site for browsing nuget packages. It is my best attempt to build a tool to help you both discover new packages and to dig in deep to learn them once found.

Fuget.org shows you most of what nuget.org shows you, but adds these exciting features:

Supported Framework List

Have you ever wondered if the library you’re using has been customized for a certain platform? Have you wondered if it will work on your platform at all?

This doubt is removed by displaying - in full technicolor - all the frameworks that the library supports.

They’re color coded so you can see at a glance:

Green libraries are .NET Standard and will work everywhere

Dark blue libraries are platform specific

Light blue libraries are for full .NET and Mono only

Yellow libraries are old PCLs that we’re all trying to forget

This solves a general pet-peeve of mine of wanting to know exactly what code I’m getting for my platform. Usually, thankfully, most libraries are .NET Standard. But for those that aren’t, it’s important to know if it will work for you.

Related to this, it’s often enlightening to learn the differences between different APIs on different platforms. When you click a library, fuget.org reads its assemblies and allows you explore its API. You can use the framework buttons to see how those APIs differ between platforms. This can let you see, for example, how Xamarin.Essentials implements secure storage on iOS vs secure storage on Android.

API Explorer

This is the big feature that I’m most excited about. All the classes in your assembly are browsable - you can drill down starting from framework, to assembly, to types.

When you click a type, its members and its documentation are displayed for you. Each member’s declaration is shown along with a short summary built from the XML docs library developers lovingly write. All of the types in the declaration are cross-linked throughout the site - even between packages - to help you explore the API.

There is even an API search - just start typing in the little box to find links to whatever types or members you’re interested in. When you click a result, the documentation for that item will be shown. If you click a type whose documentation is provided by docs.microsoft.com, then you will be redirected there instead.

If the library is open source, then a Code tab will appear that lets you browse the source of the library.

API Diff

In the modern world of release-early, release-often library development, it’s sometimes hard to tell what’s changed between different versions of a library.

To solve that problem, fuget.org can automatically generate API diffs between any two versions of a nuget package.

It looks at the public API - all the types and all their members - in both versions of the library and shows you what was added and removed through a cute little hyperlinked diff.

I recently used this to see how Xamarin.Forms 3.0 is different from 2.5. It’s fascinating to see how libraries have matured over time.

Easy Entrypoints

The URLs of fuget.org mimic nuget.org so if you ever find yourself on nuget.org but wish you were on fuget.org, just change that “n” to an “f” and you’ll be all set.

There is also a package search at the top that gives you the same results as your favorite IDE would give you.

Lastly, the homepage remembers your most commonly visited packages and keeps a short list for you. This is all stored in your browser’s local storage and never transmitted to the server - so your obsession with JSON libraries will remain a secret.

And More

I built this site as a tool for myself but am glad to finally share it with everyone. This thing has been a real work of love and I hope it helps you in your day to day work.

Colophon

I built the site using ASP.NET Core on a Mac with Visual Studio for Mac using Razor pages. It is hosted on Azure.

praeclarum

May 21, 2018

Calca in the Microsoft Store

Calca is my crazy symbolic math calculator/markdown editor designed specifically for mad scientists. Today, I am very pleased to announce that Calca 1.5 is available in the Microsoft Store!

This is exciting for me on two fronts.

First, this is a great update to the Windows version of Calca that includes plotting and high DPI support. Plots make it easy to visualize functions and calculate derivatives while the high DPI support just makes the app look good. This version also includes a large number of fixes and I hope you love it!

Second, and the point of this blog post, is the fact that the app is finally available in the Microsoft Store making it super easy for all Windows 10 users to try it.

This is a change from my previous distribution method of hosting the app on my own store. I never liked that approach for a variety of reasons - updates were hard, visibility is hard, users had to trust my payment processor, etc., etc. Now that it’s in the Microsoft Store, I hope to reach more people and simplify the update process.

The trick is that I designed Calca for Windows to work on Windows XP, and so it does not run on Microsoft’s UWP platform - previously a prerequisite for being in the store. The good news is that Microsoft is now allowing such apps in the store thanks to a program called Desktop Bridge.

If you’d like to hear more about it, please continue reading. Otherwise, you can go get a free trial to see if it’s for you.

All Hail the Desktop Bridge

Microsoft heard all of us Win32 programmers begging to be in the Microsoft Store and launched “Project Centennial” - a great program with an insulting name. So they renamed it “Desktop Bridge” and all the world’s programmers rejoiced. With The Bridge, you can get your .NET WinForms apps into the Microsoft Store and onto millions of Windows 10 machines.

I’ll be honest, I’m still not sure what The Bridge is. The good news is, you can trust it without knowing what it is because it doesn’t modify your app. Instead, it wraps up your app’s executable and all its support files into a standard APPX package.

It’s as easy as downloading the DesktopAppConverter from the store and running this command in PowerShell:

DesktopAppConverter.exe -Installer C:\LocationOfAppAndDependencies -AppExecutable App.exe -Destination C:\DesktopAppConverterOutput -MakeAppx

Well, that’s the general idea. Unfortunately, the documentation for the converter is a bit sparse, and it took me some trial and error to learn the full set of arguments you need to pass to make it work. Most of these arguments simply match what’s displayed in the Microsoft developer hub - however, they have slightly different names, creating a bit of confusion. In case you ever find yourself doing this, here’s a little guide:

-PackageDisplayName is the name of your app in the store. “Calca” for me.

-PackageName is generated by Microsoft and is something like “1B4DF00D.Calca”

-PackagePublisherDisplayName is you, but it better match the name in the developer hub. For Calca, it’s “Krueger Systems, Inc.”

-Publisher is the GUID that Microsoft lovingly calls you in bed. “CN=X0X0X0X0-X0X0-X0X0-X0X0-X0X0X0X0X0X0”

-Version is in classic 3-dot form: “1.5.0.0”

Papers Please

Unfortunately, Desktop Bridge apps are still not fully supported by the Developer Hub UI. Instead, you are going to have to fill out some web mail template forms, wait a few days, do some other banal task, wait a few days, sign another agreement, wait a few days, you get the idea.

You will also get warnings about your app being special (I know!) and how certification will take a week instead of the usual couple hours on the store. This process also involves more emails and more clicking. It’s fun.

Overall, the process went very smoothly.

I was hoping for a more streamlined experience. None of this process is hard, it’s just very bureaucratic. And it does leave me wondering how much effort updates will be... but let’s not think about that!

A New Age for Updates

My biggest regret with selling Calca directly is that I didn’t establish a good update path for customers. My policy was “email me and I’ll send you an update for a year”. This policy is bad both for my customers (I’m making them use email!) and for myself (now I have to read email!).

The net result was that the Windows version of Calca fell behind the Mac and iOS feature sets. Terrible!

Thankfully, now that the app is in a proper store, I can keep it up to date with its iOS and Mac cousins.

That’s it! Thank you for reading and I hope you have fun doing some math!

praeclarum

Mar 15, 2018

Ooui.Wasm - .NET in the browser

I’ve been obsessing over my little .NET web framework Ooui and am excited to announce that it is now capable of running fully in the browser as a web assembly. This means that Xamarin.Forms apps can run completely in the browser - without a smart server - giving .NET developers even more reach and options for distributing their apps. Try the Xamarin.Forms XAML editor demo online or read the getting started guide if you want to try it yourself.

Demo

This simple XAML Editor running as a web assembly demonstrates the power of web assembly nicely. It demos Xamarin.Forms running in the browser and shows that all logic is working by enabling you to edit the displayed XAML.

I'm hosting it on S3 to drive home the point that this app is distributed as just a bunch of static files - all execution is done in the client browser. Once the app is loaded, you can turn off your network and everything will keep working. You can also inspect the HTML source to see that it's just a shell of an app.

You may be asking how this works. Web assembly is the latest browser tech to enable non-JavaScript languages to execute in the browser. In the past, the only way to create HTML5 web apps was to write JavaScript or to compile your app down to JavaScript (like Netjs and Fable do). This obviously works but has limitations because JS wasn't designed to be a low-level target language. Fortunately, the browser cabal that runs the internet has recognized this shortcoming and created web assembly - a proper low-level target for programming languages to compile down to.

Web developers are no longer forced to work with JavaScript and thanks to the amazing work of the mono team, we can run full .NET code (.NET Standard 2.0) in the browser! This even works on mobile browsers. Wild!

Easy as 1-2-3

You can now create a .NET web assembly app with just 3 commands:

dotnet new console
dotnet add package Ooui.Wasm
dotnet build

This will create a web assembly build of your app ready to be run in any modern browser. The build is tucked away in a dist subdirectory of your bin directory.

Now, that app is quite boring and will just print “Hello World!” to the console instead of displaying the words. To fix this, we can edit the program:

UI.Publish("/", new Span("Hello World!"));

That one line of code will add a span to the HTML document to display the greeting. Every Ooui.Wasm app declares its initial UI by publishing that root element. Of course, the app is free to change things afterward. Check out the getting started guide for a longer example.

Xamarin.Forms in Web Assembly

Since Ooui.Forms already implements a backend for Xamarin.Forms, Xamarin.Forms works out of the box in web assembly!

I wrote a getting started guide for Xamarin.Forms that walks you through a complete example.

Pros and Cons of Web Assembly

Let’s compare writing an app with web assembly to writing a more traditional web app with Ooui.

Traditional

Pro: Uses billion-year-old HTML that works everywhere in the known universe. Google can read it, bots can read it, and it probably might work in Internet Explorer.

Con: You need to run a web server that can execute code and that server will have to scale as your users increase.

Pro/Con: Data is shared by default between users (since it’s all coming from the server)

Web Assembly

Pro: No smart server needed - you can host your app on a static web server such as Amazon S3, Azure Blob, another CDN, or a low-power device.

Pro: Apps can be cached to be able to run in disconnected scenarios.

Con: Relies on fancy new support in modern browsers. Fortunately support is ubiquitous today, but this won’t be working on Windows XP.

Pro/Con: Data is private by default between users (since everything is running locally)

The last items were marked pro/con because it really depends on your app whether data between users should be easily shared. If you’re making a social network, then yes you want easy sharing. If you’re making a private journaling app, maybe local is right.

Comparison with Blazor

Blazor is an amazing product experiment from Microsoft that also uses mono's web assembly support. Instead of exposing the DOM and classes and objects as Ooui does, it uses Razor templates composed through "components" to build your UI.

You can think of Ooui.Wasm and Blazor as application frameworks running atop a common runtime (mono wasm). Ooui was designed to feel like classical native object oriented UI development which enables it to run even higher-level app frameworks like Xamarin.Forms. Blazor was designed to feel like modern React-style web development where you're writing HTML templates. Pick your poison.

Big Thanks

It’s surprising how little of Ooui had to change in order to work as a web assembly. This is thanks to the amazing effort of the mono team. I want to especially thank Rodrigo Kumpera for helping me get everything working.

praeclarum

Nov 15, 2017

Xamarin.Forms on the Web

TLDR: I implemented a web backend for Xamarin.Forms so that it can run in any browser. It achieves this without javascript recompilation by turning the browser into a dumb terminal fully under the control of the server (through web sockets using a library I call Ooui). This crazy model turns out to have a lot of advantages. Try it here!

A Need

I have been enjoying building small IoT devices lately. I've been building toys, actual household appliances, and other ridiculous things. Most of these devices don't have a screen built into them and I have found that the best UI for them is a self-hosted website. As long as the device can get itself on the network, I can interact with it with any browser.

There's just one problem...

The Web Demands Sacrifice

The web is the best application distribution platform ever made. Anyone with an internet connection can use your app and you are welcome to monetize it however you want. Unfortunately, the price of using this platform is acquiescence to "web programming". In "web programming", your code and data are split between the client that presents the UI in a browser and the server that stores and executes application data and logic. The server is a dumb data store while the client executes UI logic - only communicating with the server at very strategic points (because synchronization is hard, yo). This means that you spend the majority of your time implementing ad-hoc and buggy synchronization systems between the two. This is complex but is only made more complex when the server decides to get in on the UI game by rendering templates - now your UI is split along with your data.

Getting this right certainly is possible but it takes a lot of work. You will write two apps - one server and one client. You will draw diagrams and think about data state flows. You will argue about default API parameters. You will struggle with the DOM and CSS because of their richness in both features and history. You will invent your own security token system and it will be hilarious. The web is great, but it demands sacrifices.

(And, oh yes, the server and client are usually written in different languages - you have that barrier to deal with too. The node.js crew saw all the challenges of writing a web app and decided that the language barrier was an unnecessary complication and removed that. Bravo.)

Something Different

I was getting tired of writing HTML templates, CSS, REST APIs, and all that other "stuff" that goes into writing a web app. I just wanted to write an app - I didn't want to write all this boilerplate.

I decided that what I really wanted was a way to write web apps that was indistinguishable (from the programmer's perspective) from writing native UI apps. If I wanted a button, I would just new it up and add it to other UI elements. If I wanted to handle a click event, I wanted to be able to just subscribe to the event and move on. What I needed was a little magic - something to turn my simple app into the server/client split required by web apps.

That magic is a library I call Ooui. Ooui is a small .NET Standard 2.0 library that contains three interesting pieces of technology:

A shadow DOM that gives a .NET interface to the web DOM. It has all the usual suspects <div>, <span>, <input>, etc. along with a styling system that leverages all the power of CSS.

A state-synchronization system. This is where the magic happens. All shadow DOM elements record all the operations that have ever been performed on them. This includes changes to their state (setting their inner text for example) but also methods that have been called (for instance, drawing commands to <canvas>). This state can then be transmitted to the client at any time to fully mirror the server state on the client. With this system, all logic is executed on the server while the client renders the UI. Of course, it also allows for the client to transmit events back to the server so that click events and other DOM events can be handled. This is the part of Ooui that I am the most proud of.

A self-hosting web server (with web sockets) or ASP.NET Core action handlers to make running Ooui very easy. If I want to self-host a button, I simply write:

var button = new Button { Text = "Click Me!" };
button.Clicked += (s, e) => button.Text = "Thanks!";

// Start a web server and serve the interactive button at /button
UI.Publish ("/button", button);

I can do this from any platform that supports .NET Standard 2. I can run this on a Mac, Linux, Windows, Raspberry Pi, etc.

Alternatively, you can host it on an ASP.NET MVC page if you want it up on the internet:

public class HomeController : Controller
{
    public IActionResult Index ()
    {
        var button = new Button { Text = "Click Me!" };
        button.Clicked += (s, e) => button.Text = "Thanks!";

        // Return interactive elements using the new ElementResult
        return new ElementResult (button);
    }
}

Pretty neat huh?

But one more thing...

Xamarin.Forms Support

The DOM is great and all, but what do .NET developers really love when you get right down to it? XAML. This little serialization-format-that-could has become the standard way to build .NET UIs. Whether you're writing a Windows, UWP, or mobile app, you expect there to be XAML support.

So I made XAML work on the web by implementing a new web platform for Xamarin.Forms. Now, any of your Xamarin.Forms apps can run on the web using ASP.NET.

Xamarin.Forms was not at all on my radar when I was building Ooui. Eventually though I realized that it was the perfect basis for a web version of Forms. I thought the idea to be a little silly to be honest - web developers love their CSS and I didn't think there was much point. But one day I heard someone ask for just that feature and I thought "now we're two".

I had never written a backend for Xamarin.Forms but found the process very straightforward and very easy given its open sourceness (e.g. I copied a lot of code from the iOS implementation :-)). There's still a bit of work to be done but Xamarin.Forms and Ooui are getting along like long-lost cousins.

Animations work, pages and layouts work, styling works (as far as I have implemented it), and control renderers are currently being implemented. Fonts of course are an annoyance and cause a little trouble right now, but it's nothing that can't be fixed.

Once I got Xamarin.Forms working on the web I realized how wrong I was for thinking this to be a silly technology. Writing web apps with the Forms API is a real pleasure that I hope you'll get to experience for yourself.

Now that I am officially releasing Ooui, I want to work on a roadmap. But for now I mostly just want to hear people's opinions. What do you think about all this? What are your concerns? How do you think you could use it? Do you think it's as cool as I do? (Sorry, that last one is a little leading...)

praeclarum

Sep 27, 2017

"Hotdog or Not" Using Azure Custom Vision, CoreML, and Xamarin

TL;DR I used Microsoft’s Custom Vision service to train a CoreML model and wrote an iOS app in Xamarin to execute it in less than two hours. It has a loose tie-in with a popular television show. Code is on GitHub. You can hear James and me discuss this on Merge Conflict.

Machine Learning Is Easy Now?

Microsoft released a webapp called Custom Vision as a part of their Azure Cognitive Services. While Microsoft has been a player in machine learning for a while now, this new service is special for one reason: it can export CoreML models.

CoreML is a new feature of iOS 11 that enables apps to execute neural networks (and other ML models) locally on the device. While this has always been possible, Apple made execution easy.

All you need is a trained model and all of a sudden your app can do fancy AI tricks - all locally without needing a network connection and without sharing information with third parties.

The only trick is that you need to train a model. While there are certainly pre-trained models online that you can download, chances are they won’t do exactly what you want.

To train a CoreML model, you would follow the Keras tutorials and examples. Keras, while amazingly powerful, is neither easy to learn nor easy to use even after you’ve learned it. Eventually your skills increase and you can use it, but it does take quite some effort. It also takes some money - training deep networks is slow on standard PC hardware. Soon you’ll be buying fast GPUs or paying for virtual machines out in the cloud.

Now with Custom Vision, Microsoft has made training easy. Instead of learning Keras and finding some fast GPUs to run it on, you can just use Microsoft’s web app. They use Azure’s infrastructure to find machines and, most importantly, they don’t require that you learn how to train networks manually. Instead, there is a GUI that holds your hand, lets you experiment, and keeps everything organized for you.

In this version, they made training easy only for a particular kind of model: recognizing the dominant object in an image. This is a classic task for CNN based neural networks to solve because they’re really good at it and it’s a useful capability with numerous real-world applications. It’s a great choice for Microsoft to lead with this type of model.

So that’s the hype. But does it work?

I tried training a new model and writing an app to execute it to find out. Since I wasn’t confident in my success (and perhaps had too many beers while extolling the virtues of ML to friends), I decided to make it an easy problem: hot dog or not. The app would take a picture and decide if the dominant object in the scene is a hotdog. Yes mom and dad, I’m really putting my degree to use.

I wrote my experience below as a tutorial for doing these kinds of trainings yourself. If you follow along, you’ll be able to write an iOS ML app yourself.

Step 1. Gather Training Data

No matter how skilled you are as a data scientist you will always be terrible at one thing - gathering training data.

We need two sets of images to train our model: one set of hotdogs and another of not hotdogs.

Sounds easy right? Well sure it is until you start actually doing it. Quickly you’ll run up against questions and troubling biases:

Is a drawing of a hotdog a hotdog? (Aristotle would be proud.)

Are two hotdogs a hotdog? (What about 3?)

Should I have an equal number of hotdogs with mustard as hotdogs with ketchup? (Should you bias the network towards your a priori view of the world? Are your biases personal or universal?)

Should I have an equal number of images of hotdogs and not hotdogs? (Since nearly all objects in the universe are not hotdogs, just how strong should our bias be?)

Why do people dress up their dogs as hotdogs?

The list goes on and on. You will confront your biases when collecting training data. Those biases will then be passed onto the network you train. You’ve been warned.

Thankfully the nature of this app precludes the need to do much soul searching for biases towards hotdogs. So I made some executive decisions:

No, drawings are not hotdogs

Yes, many hotdogs are a hotdog

Bias towards ketchup because it’s better

Bias towards not hotdogs since people love to try to trick these kinds of apps

Just accept it

Data collection takes a long time too even with Google’s assistance. After an hour of dragging and dropping, I ended up with 75 images of hotdogs and 175 images of not hotdogs. (I could have written a script but we all know how deep that rabbit hole is.)

For anyone who’s trained a CNN before, you know that this is a very small training set. Even more absurdly, Custom Vision only requires 5 images of each type. What’s going on here?

While Microsoft doesn’t explain the details, my guess is that they are fine-tuning a model already trained on images. The idea is that you take a trained model and then re-train only a part of it on new data. The hope is that the majority of the model is general purpose and can be reused. This saves training time and also reduces the required training set size. I’m not sure if this is what they’re doing, but I’m relieved that I don’t have to gather tens of thousands of images.

Of course with all ML, more data is better. But my fingers were getting tired. (Also, Custom Vision is currently capped at 1,000 training images.)

Step 2. Create a Project

You will want a project for each network you train. Projects hold all of your images and your trained models. You will end up training multiple times because it’s good to experiment with different training set sizes and compositions.

Create an account on https://www.customvision.ai. It’s free!

Create a New Project.

I named the project MyHotDogOrNot, gave it a banal description, and then chose the domain General (compact).

Domains are starting points for your trained model. If you are using cognitive services as a web API, then you should choose whichever domain most closely matches your training data.

General (compact) is the only domain that supports CoreML export so we must choose that. Hopefully Microsoft will allow us to use the other domains in the future in order to improve accuracy.

Step 3. Create Tags

When you’re viewing your project, you will see a list of tags. We need to make this list match the types of training images gathered.

Click the + at the top of the Tags list.

Create two tags: hotdog and not-hotdog.

When you’re done, you’ll see a list of your tags. The (0) means there are no images yet associated with the tags.

Step 4. Upload Training Data

You can upload all the images with the same tag using just one command.

Choose Add images from the toolbar and select all of your hotdog images.

Add the tag hotdog.

Click Upload files.

Repeat for the tag not-hotdog.

Step 5. Train the Model

So let’s train this thing already.

Click the big green Train button.

Go to the Performance tab and wait for your “Iteration” to finish.

When training is complete you will see the performance screen with the overall Precision and Recall of the model. In my case, I get slightly better results detecting not-hotdog than hotdog but they’re both great numbers so why fret.

Of course, these numbers don’t mean your network will work in the real world since the performance is measured against images you hand selected (with all your gross human biases). That said, you can use them as rough indicators of the relative performance of one training iteration against another.

Step 6. Export the CoreML Model

Finally, we can retrieve the CoreML file.

Click Export from your iteration’s performance screen.

Choose iOS 11 (CoreML) from the platform selection screen.

Click Export.

Click Download.

You will now have a fancy .mlmodel model file. Rename it to something nice.

If you open it with Xcode you will see its inputs and outputs.

We can see that its input is a 227 x 227 pixel image named data and its output includes a classLabel string that will be the model’s best judgement and also a loss output that will give a closeness measure for each of our tags.

Step 7. Write an App

At this point we have a model file and just need to put a UI on it.

To keep the code to a minimum, I’m going to use the Vision framework to execute the CoreML model. This framework makes resizing images to our required 227x227 dimensions easy and also takes care of numerical and pixel format conversions.

I will also use ARKit to display the camera on the screen. This is most definitely overkill, but it greatly reduces the amount of code we need to write to deal with the camera.

First, create a new Single View app.

Modify ViewController.cs to add an AR view.

// In ViewController
readonly ARSCNView cameraView = new ARSCNView ();

public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    cameraView.Frame = View.Bounds;
    cameraView.AutoresizingMask = UIViewAutoresizing.FlexibleDimensions;
    View.AddSubview (cameraView);
}

Perform the standard management of that view. This is all we need to get a live camera preview.

// In ViewController
public override void ViewWillAppear (bool animated)
{
    base.ViewWillAppear (animated);
    var config = new ARWorldTrackingConfiguration {
        WorldAlignment = ARWorldAlignment.Gravity,
    };
    cameraView.Session.Run (config, (ARSessionRunOptions)0);
}

public override void ViewWillDisappear (bool animated)
{
    base.ViewWillDisappear (animated);
    cameraView.Session.Pause ();
}

Add the model to the resources section of your app.

Add code to load the model. Models need to be compiled before they can be loaded. If you have access to Xcode, you can pre-compile your models. Compiling on the device is pretty fast so we won’t bother with that optimization. (I do this loading in the view controller’s ViewDidLoad method but you should architect your app better by doing this work on a background task.)

This also includes code to initialize the Vision request that we will make. Requests can be used for multiple images so we initialize it once. When a request completes, HandleVNRequest will be called.

// In ViewController
MLModel model;
VNCoreMLRequest classificationRequest;

// In ViewController.ViewDidLoad ()
var modelUrl = NSBundle.MainBundle.GetUrlForResource ("HotDogOrNot", "mlmodel");
var compiledModelUrl = MLModel.CompileModel (modelUrl, out var error);
if (error == null) {
    model = MLModel.Create (compiledModelUrl, out error);
    if (error == null) {
        var nvModel = VNCoreMLModel.FromMLModel (model, out error);
        if (error == null) {
            classificationRequest = new VNCoreMLRequest (nvModel, HandleVNRequest);
        }
    }
}

Add a tap handler that will respond to any taps on the screen (I like simple UIs). When a tap is detected, the Vision framework will be used to perform the model execution.

// In ViewController.ViewDidLoad ()
cameraView.AddGestureRecognizer (new UITapGestureRecognizer (HandleTapped));

// In ViewController
void HandleTapped ()
{
    var image = cameraView.Session?.CurrentFrame?.CapturedImage;
    if (image == null)
        return;
    var handler = new VNImageRequestHandler (image, CGImagePropertyOrientation.Up, new VNImageOptions ());
    Task.Run (() => {
        handler.Perform (new[] { classificationRequest }, out var error);
    });
}

void HandleVNRequest (VNRequest request, NSError error)
{
    if (error != null)
        return;
    var observations = request.GetResults<VNClassificationObservation> ()
        .OrderByDescending (x => x.Confidence);
    ShowObservation (observations.First ());
}

Finally, in ShowObservation we present an alert with the model’s best guess.

// In ViewController
void ShowObservation (VNClassificationObservation observation)
{
    var good = observation.Confidence > 0.9;
    var name = observation.Identifier.Replace ('-', ' ');
    var title = good ? $"{name}" : $"maybe {name}";
    var message = $"I am {Math.Round (observation.Confidence * 100)}% sure.";
    BeginInvokeOnMainThread (() => {
        var alert = UIAlertController.Create (title, message, UIAlertControllerStyle.Alert);
        alert.AddAction (UIAlertAction.Create ("OK", UIAlertActionStyle.Default, _ => { }));
        PresentViewController (alert, true, null);
    });
}

And that’s it, we now have an app that can detect hot dogs (or not)!

You can find the complete source code on GitHub.

Conclusion

It’s great to see Microsoft and Apple technologies working together to make adding a powerful feature to apps easier. If you made it this far, you saw how I was able to build the app in less than two hours and I think you can see that it’s pretty easy to make your own ML apps.

If you enjoyed this, you will probably enjoy listening to James Montemagno and me discuss it on our podcast Merge Conflict.

praeclarum

Feb 27, 2017

Fashionable REPL Prompts

I was writing a new language the other day and I thought, “this puppy needs a REPL”!

But before I could write one, I had to decide how it would look and behave. I mean, I knew the basics: take something in, execute it, then display the result. But how do you open the help? How do you handle multi-line input? Can I use terminal colors? What does the prompt look like?

To answer that last one, I took a quick survey of my favorite languages - turns out they've all coalesced to the > prompt, but there are some fun variations:

F#

> 2 + 3;;
val it : int = 5

F# is my favorite language, but the REPL is a bit busy for me. First, the language dictates this weird crying emoji (;;) to end input, and the result is always encumbered by val it : noise. But I still <3 you F#.

C#

csharp> 2 + 3
5

Much cleaner! No semicolons, and just the answer. Well C# does show its vanity a bit with its name announcement on each line - but heh, it deserves it.

Python

>>> 2 + 3
5

Elegant and bold at the same time. The >>> means Python, but you don't actually say "Python". So hipster, so cool. I'm sure I could copy >>> as remixing is hip these days; but no, I'd be trying too hard.

Ruby

irb(main):009:0> 2 + 3
=> 5

OK, I get what you're going for here Ruby. Part of me even likes it. But no. Too much. I would expect this kind of complexity and technical jargon when logging into my refrigerator - but my dev environments should have a little more refinement.

On the plus side - Ruby outputs the answer in yellow. I'm totally stealing that.

Oh and geeze, they use the same symbol as Calca - so how could I not love that?

JavaScript

> 2 + 3
5

Look at you. Simple, reasonable, well thought out. It's like staring at an oil on canvas painting containing only the lowercase Helvetica a.

Conclusion

While we have standardized on > as the one prompt to rule them all, there is a fair amount of diversity as to what comes before it.

I'm a fan of simplicity and, in the end, I went with C#'s vanity prompt. Cause, like, I'm vain.

praeclarum

Jul 6, 2016

Continuous - C# and F# IDE for the iPad

Over the past six months I have been working on a new .NET IDE for the iPad, and today I am very pleased to release it on the App Store.

Continuous gives you the power of a traditional desktop .NET IDE - full C# 6 and F# 4 language support with semantic highlighting and code completion - while also featuring live code execution so you don't have to wait around for code to compile and run. Continuous works completely offline so you get super fast compiles and your code is secure.

Continuous gives you access to all of .NET's standard library, F#'s core library, all of Xamarin's iOS bindings, and Xamarin.Forms. Access to all of these libraries means you won't be constrained by Continuous - you can write code exactly as you're used to.

Real Work, on the iPad

I love the iPad but was still stuck having to lug around my laptop if I ever wanted to do "real work". Real work, in my world, means programming. There are indeed other IDEs for the iPad: there is the powerful Pythonista app and the brilliant Codea app. But neither of those apps was able to help me in my job: writing iOS apps in C# and F#. I couldn't use my favorite languages on my favorite device and that unfortunately relegated my iPad to a plaything.

That realization produced this tweet last December:

I resolve to use my iPad Pro for software development in 2016.

— Frank A. Krueger (@praeclarum)

January 1, 2016

Well it took me a bit of time, but I finally have it: a .NET IDE on the iPad (and phone too!).

But it's not "just an IDE". I didn't want it to simply be sufficient - I wanted it to be great. I also thought it was a nice time to push the state of the art in .NET IDEs a tad.

For ages compiled languages like C# and F# have forced a sequential development loop on programmers: the Code-Compile-Run-Test loop. We code something up, wait for it to compile, then wait for it to deploy and run, then we get to test it.

I hate waiting for compilation and deployment so I designed Continuous to minimize those steps. It does this by eagerly compiling your code - never waiting for you to tell it when to start. It runs your code as soon as those compiles complete successfully and displays the results of that execution right next to your code. Now you can focus on the code and the results of that code instead of being distracted by all the silly machinery of a compiler and IDE.

The benefits of making compilation and execution fast have surprised me. My iPad has become my favorite place to write apps now.

The UI is visualized right next to the code that is building it.

I am no longer constrained by designers with their static view of the world - the UI objects in Continuous are live and interactive.

I can use real code files but still visualize objects out of them as if they were scripts.

I can focus on building one screen of my app at a time and see the results without having to navigate from the first screen to see the screen I'm working on over and over.

I could argue that I'm a more efficient programmer thanks to these changes. Perhaps I am more productive. But the truth is, I'm just happier using Continuous. I play with GUIs more now, trying new ideas and tweaking things left and right. It's quite liberating and plain old fun to get nearly instant feedback on your work.

I hope you find these features as exciting as I do. Please visit the website if you want more details on them, or throw caution to the wind and buy Continuous on the App Store now to see them first-hand.

Standing on the shoulders of giants

Continuous wouldn't be possible if it wasn't for .NET's great open source ecosystem. Continuous uses Roslyn for compiling C# and FSharp.Compiler.Service for compiling F#. Continuous also relies heavily on Cecil (what problem can't be solved with Cecil?) Also, Xamarin.Forms could only be included thanks to Xamarin open sourcing it.

And of course, none of this would be possible without mono and Xamarin.

Colophon

I wrote Continuous in F# using Xamarin Studio. The code is more functional than object oriented and uses a redux style architecture. I don't think I could have built such a large app with its sophisticated requirements without F# at my side. Three years ago I wasn't sure how to write GUI apps in a functional language, now I question why I haven't always done things this way.

praeclarum

Mar 21, 2016

Calca 1.4 for iOS - Awesome and Free

Today I’m pleased to release Calca 1.4 for iOS. This is an exciting release for me for two reasons.

First, this is the best version of Calca yet; it supports:

Plotting

Dark Mode

iPad Pro and side-by-side apps

Many keyboard fixes that will prevent text from jumping around

Second, I’m trying something a little crazy with the price - Calca for iOS is now free! This means more people than ever can try Calca and see how it can be used to solve their problems.

How can an independent app developer survive making a free app? Don’t worry, I didn’t fill it with ads. Except one small ad: a request for a donation to support the development of Calca. I’m hoping that users will find enough value in it that they will contribute some money towards its development. These contributions in no way add features to the app - they only remove the donation request. Your contribution enables me to continue working on that app, and I thank you for it.

The donations are tied to a time period. This provides a way for you to choose an amount that you’re comfortable with and also provides a way for you to donate multiple times if you’re feeling like an awesome superhero of a person. #justsaying

This is a bit of an experiment - one that I hope will pan out because I am truly excited to see how many people will use the app now that it’s free. That said, I am open to failure and learning from it. Let’s see how the patronage model works!

Anyway, enough about that. Go get Calca and get calculating!

praeclarum

Nov 9, 2015

Live Coding with Xamarin iOS

TLDR; I wrote a new Xamarin Studio add-in that dramatically reduces the number of Build and Run cycles you need to perform while developing an app. Please follow the instructions to install Continuous and let me know what you think!

UPDATE I renamed this project from "LiveCode" to "Continuous Coding" or "Continuous" for short because I was being harassed by a bunch of Britishers. Bullying works, kids.

I Love My History

Since the beginning of time, there has been one limitation of running .NET code on iOS using Xamarin - System.Reflection.Emit doesn't work. That means you cannot dynamically create executable code.

It's not a serious limitation. .NET has had this ability for years but, as a community, we really only use it for one purpose: to make code fast. In that vein, this missing feature hasn't really been a problem for us because the slow path is often just fine.

But there's a second use of Emit: improving the development experience with things like REPLs.

While mono spear-headed the "C# Interactive" movement with the csharp REPL, they hadn't been able to give us that tech when running on iOS.

Until now.

Xamarin Inspector

Xamarin has released their Xamarin Inspector tool that acts like the developer tools that you get with web browsers.

It's really nifty. On one hand it gives you an inspectable visual tree of your live-running app - just like the DOM in a webapp. They even have a cool layer breakout 3D view.

On top of that, there is a REPL so that you can type in C# code and see the result. This acts like the "Command" window in the browser dev tools.

Put these two together and you have a fantastic tool to diagnose what a mess you made of the view hierarchy. ;-) Click the link above and install the Inspector, you won't regret it.

Yes, And

Yes Xamarin Inspector is great, and I want to see more tools along these lines. I especially can't wait to see if Xamarin uses this tool to help us write UI tests.

And yet, I have always been a bit unenthusiastic about classical REPLs. Surely it's fun to have a command prompt and play around a bit, but I have never been comfortable with the fact that you are not working with "real code" - the code that actually gets built to ship your app.

Since the second dawn of time, IDEs have integrated REPLs with real code with a simple trick: they allow you to select some code from your real code and send that over as a snippet to the REPL.

Even this stupid little convenience makes a world of difference. I use the heck out of F# Interactive which gives me this exact feature, and it's amazing.

Thanks to this tool, I find myself doing full app builds far less often.

Builds are the enemy for two reasons:

First, they lock up the IDE as you wait for big compilers to do their thing and as you wait for your app to restart. Of course, the IDE isn't frozen, but my mental state is. I cannot edit code because I might screw up the compilation and because the debugger will get lost. So I go into a mental spin-loop watching the progress bar. It's not healthy. (I used to check Twitter, but fixed that with an edit to /etc/hosts.)

Second, they re-initialize your context. If I'm working on one part of my app that's far removed from the initial screens, then I have to dig back through the app to get to seeing what I'm actually interested in. If I was a better automated test writer, or a better designer, or a better planner, this wouldn't be such a problem. But back to the real world...

A little while ago, I took a stab at doing something different from the REPL and wrote Calca. After some futzing around I found an environment that allowed me to see results as quickly as I could type them and it didn't have the annoying necessity to keep sending code to the evaluator.

I want something like Calca for my day to day work. I want to write code and see the results immediately.

Xamarin Released Something Awesome and I Hadn't Realized it Yet

While watching James Montemagno's live stream on the Inspector, I started to wonder how it worked.

I started to wonder if Xamarin had snuck dynamic assembly support into their newest versions. I wrote a quick app that referenced Mono.CSharp, which hosts mono's awesome dynamic evaluator, then tried to run the evaluator and got what I expected:

System.InvalidOperationException

No dynamic code for you.
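
The probe itself was only a few lines. Here is a reconstruction (in F#, against Mono.CSharp's Evaluator API as I remember it - not my exact test app):

open Mono.CSharp

let probe () =
    let printer = ConsoleReportPrinter ()
    let evaluator = Evaluator (CompilerContext (CompilerSettings (), printer))
    // Evaluating any snippet forces Mono.CSharp to emit a dynamic assembly -
    // exactly the thing that throws InvalidOperationException on stock Xamarin.iOS.
    evaluator.Evaluate "1 + 2" |> printfn "%A"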

After James finished up, I installed the Inspector and laughed at some of my view hierarchies. Great tool.

And on a whim I ran my test app again, and you won't believe what happened next. The stupid thing ran.

That's right, installing Xamarin Inspector makes dynamic assemblies work. (On the simulator at least.) I don't know what dark and old magic makes this possible but the Xamarin engineers have come through again.

Well, we're given a hint into this dark magic. In the Inspector docs, this passage appears as a "known limitation":

As long as the Inspector addin/extension is installed and enabled in your IDE, we are injecting code into your app every time it starts in Debug mode

Haha, they call that a limitation. Dear Xamarin, enabling dynamic assemblies in all apps, at least in the development environment, is not only OK but please keep doing it. Please don't see this as a limitation - this is a feature I never knew was possible and I don't want to lose it.

When I saw my test program successfully evaluate code dynamically, I was aghast. Shocked because I didn't expect it to work, and horrified by all the ideas that occurred to me. With great power comes great, oh forget it.

An Idea

Little known fact: I spam Xamarin with bug reports and feature requests on a monthly basis. They are very tolerant of me and I appreciate it.

One of my last crazy ideas was a tool that I want to see integrated into the IDE that would enable live coding scenarios - all in an attempt to break the Build and Run cycle. It was a play off of Inspector with a little bit of influence from Example Centric Programming (pdf).

The whole premise was that I wanted to see live evaluations of whole classes and modules while I was working on them without having to manually send snippets to a REPL. I wanted the tool to monitor certain classes and to visualize them whenever I changed them.

Imagine creating a UI layout. We have two options: we can use a designer or we can write it in code. With a designer, we pay the price of being separated from logic but are awarded with instantaneous feedback (or instantaneousish if using autolayout). With code, we have the full power of logic and data, but are stuck with the Build and Run cycle.

With live code, we can have the best of both worlds. We write the UI using code, but we see the effects of our code instantaneously.

Time to Hack

In two days I have been able to put together one tenth of the tool I described in my email. But even this small version of it has me really excited.

It is able to do two things:

Send code to the iOS simulator to be evaluated and then visualized. This is to enable classic scenarios where I sometimes just want to know the value of a particular expression.

Monitor whole classes that are evaluated and visualized whenever they are edited. This makes creating UIs super fun and is the part I'm most excited about.

Please go follow the instructions to run it and let me know what you think. (This only works in Xamarin Studio.)

I am not sure how well words can describe the tool, so I took the time to record a video of me using it. The video's a bit long, but I think you can get the general idea after just a few minutes (and if you skip the first 6 minutes describing installation).

Check it out:

Now What?

I hacked together a cool little tool that I'm pretty sure will become an invaluable asset. I still want to implement more of the features I described in my original design and make it work on other platforms.

Speaking of platforms, there is one major limitation: it only works in C#. While most won't see that as a limitation, I have been doing a lot of coding in F# lately and would prefer the tool to work with that.

Unfortunately F# doesn't ship with a simple compiler service like Mono.CSharp and I haven't tried yet to get the compiler to compile itself under Xamarin. I'm sure that this is technically possible, but gosh that F# compiler is intimidating and I don't know where to begin.

I'm also interested in seeing how much feedback this blog post and tool get. I often wonder if I'm just a nutter for hating Build cycles and can't wait to be validated or invalidated by your response.

So say hello to me @praeclarum on Twitter and let me know if any of this looks good to you.

praeclarum

Sep 14, 2015

Drone Builder - A Story of Drones, React, and F#

Drone Builder is a site I created to play with different DIY drone (multicopter) designs.

Building a drone isn't rocket science but there is a lot to learn when making your first one. You first have to learn what parts you need and what all their parameters mean. Then you have to learn how they combine to produce different effects. On top of it all, you have to do it on a budget.

It's a lot to take in, but it's also a trying task even when you know all of that. You still have to track down shipment times, compare reviews, maintain Excel sheets - it's a messy process.

So, Drone Builder.

The UI is split into two areas: designs on the left and components on the right. Each component has a list of products sold by online merchants (Amazon and Banggood).

As you choose products on the right side, a design is built up on the left side. If you choose multiple products for a component, multiple designs will be built with all the possible combinations.

That combination of designs is the true power of Drone Builder - not only can you design one drone, but you can easily design multiple variations and compare them.

It's a fun little app, I hope you'll give it a try!

But you're not here for the drones, you want to know about this F# and React thing.

Historically...

To explain why I like React, let me compare it to the traditional way GUI apps are built.

I started building UIs with Visual Basic. In those days, application logic and UI logic mutated a large UI tree to create user experiences.

Well, it's still how we do it. The HTML DOM is a large tree that we can manipulate with JavaScript. Building apps in HTML is roughly how we did it in VB. We may use fancy binding libraries nowadays, but we're still mutating some application data and then mutating a UI tree to match it (and the other way around).

But, but, but. Time marches on and ideas evolve. We started to see some flaws with this architecture for apps.

First, it makes parallelism hard - if objects are mutated anytime, by anyone, then it's hard to write parallel tasks that you can trust.

Next, we started to notice the dependency graphs were becoming incomprehensible. If a tap mutates a property of object A resulting in an event that mutates a property of object B that then mutates a property of A - we get ourselves into a potentially endless update cycle. We've all added code like this:

void HandleEvent() {
    updatingUI = true;
    UpdateUI();
    updatingUI = false;
}

All to break the mutation dependency chain for a brief moment. (Usually to guard against over-zealous UI events firing.)

Even if you manage to avoid cycles, you have a wild graph of objects with a plethora of references - both explicit references and implicit ones from events and closures. That is to say, you are creating fertile ground for memory cycles that keep objects around past their welcome.

To combat this, one usually has to write "unbinding" code. This takes the form of unsubscribing from events and disposing of objects we know to be useless.

It feels a lot like writing destructors in C++ - simple enough to explain: for every event you subscribe to, make sure you unsubscribe. It's a bookkeeping exercise; but who likes to keep books? One missed unsubscribe and you have a dangling object eating your memory and resources.
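
In .NET terms, the books look like this (a minimal sketch - the event and handler are made up):

open System

// A stand-in for a UI control's event; any IEvent<_> behaves the same way.
let clicked = Event<EventArgs> ()

// Subscribing hands back the undo operation as an IDisposable.
let subscription =
    clicked.Publish |> Observable.subscribe (fun _ -> printfn "update UI")

// The "unbinding" code must remember to run it, or the closure (and
// everything it captures) lives as long as the event source does.
subscription.Dispose ()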

Lastly, mutation and its destruction of data becomes undesirable. Building an undo buffer becomes tricky business if we routinely overwrite data. Rolling back to a valid state after a failed operation is very tricky business. But these are trivial problems to solve when you don't destroy data.

The enemy has been identified as mutation - both mutation of application data and mutation of the UI tree.

React for Clean UI Layers

React enables creating UIs without mutation and rewards you for not mutating your model.

React flips this model by treating the UI like any other data structure. HTML entities, such as div - the analog of native "views", become light-weight objects instead of large and complex OS resources.

The idea is to map your application state into a light-weight UI tree. Data mapping is a familiar operation to any functional programmer and any .NET programmer that loves LINQ.

We never mutate the DOM directly. Instead we just keep creating new UIs - never destroying with mutation.

React then takes on the onerous task of synchronizing that tree with the heavy DOM. This is all done implicitly on behalf of the programmer.

Generally speaking this is a heavy-duty process, but its performance can be drastically increased if you use immutable data. This is because React can cache the results of previous generations if it is told that data hasn't changed. The only way to know if data hasn't changed is to compare it to old data - something that can only be done if you don't destroy the old data. Thus, immutability.

Writing these map functions can get a bit tedious - especially when designing UIs - so React introduces "components" with the JSX syntax. Each of these components maps a bit of your application state to UI state using declarative HTML syntax. Instead of `map` functions, you write markup templates. For those familiar with XAML, this is very analogous to a XAML page binding to a view model.

React is nicely architected with an emphasis on composing apps from many of these small components - each responsible for just a small part of the UI. When you combine all these little components, you can build up an information-rich page.

Updating Application State

There are events in React, but you don't handle them the way you did in VB. That is, you don't mutate the app state, then mutate the UI component handling the event.

Instead of mutation, you clone the entire application state while making precision substitutions in that clone. This clone preserves the old application state while also giving the illusion of mutation.
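
As an aside (jumping ahead to the F# side), F#'s record copy-and-update expression is precisely this "clone with substitutions" trick. A toy example with made-up fields, not real app state:

type Ui = { Theme : string; FontSize : int }

let before = { Theme = "light"; FontSize = 12 }

// A clone with one substitution - 'before' is untouched, so it can be kept
// around for an undo buffer or for cheap change detection.
let after = { before with Theme = "dark" }

printfn "%s -> %s" before.Theme after.Theme   // light -> dark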

You then notify the root of the UI tree - a component - that the app state has changed. It re-maps itself (a process called "rendering" in React) and recreates the UI tree. The DOM is subsequently (implicitly) mutated to match that tree.

You end up with an app that centralizes app state changes. Facebook has even gone so far as to codify such centralization in their Flux library.

About that Persistence...

To ameliorate the cost of cloning, persistent data structures (or, immutable data structures) are used. These are designed to make this "clone with substitutions" trick easy on the CPU and memory.

There's just one problem in this new React world - JavaScript.

I have nothing against JavaScript - I find it to be a rather enchanting language in fact - but it was not designed with immutable data structures in mind. It has no syntax to help declare them. It has no syntax to clone them. It only knows about reference equality - not structural.

Facebook, the creators of React, recognized this and built another library to help out. This one is called Immutable. It's a brilliant little library (50KB minified) that adds a lot of persistent data structures to JavaScript. If you're willing to forego JavaScript's standard way to create objects, then this library puts you well on the way to success.

But, but, but. Immutable is great, but there's a bit more needed to write persistent transformations than what a set of generic data structures can provide.

Ideally, you will have a programming language that takes immutability seriously. Something like F# (or Elm, or Swift, or ...).

F# to the Rescue

Not only does F# have immutability baked into its design, but it has a large mature library of algorithms, data structures, and abstractions to help you write the logic for your app.

When I think of F#, I think of the Seq type. This is your generic pull-based infinite stream of data, and F# has a wonderfully powerful set of operations for working with these streams in non-destructive ways. It's a very useful tool to have at your disposal, and it's a missing feature of JavaScript.
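
A taste of what that looks like (hypothetical data, not Drone Builder code):

// The names of the three cheapest products under $100,
// computed as one non-destructive pipeline.
let cheapest (products : seq<string * float>) =
    products
    |> Seq.filter (fun (_, price) -> price < 100.0)
    |> Seq.sortBy snd
    |> Seq.truncate 3
    |> Seq.map fst
    |> Seq.toList

cheapest [ "esc", 24.0; "motor", 105.0; "frame", 89.0 ]
|> printfn "%A"   // ["esc"; "frame"]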

For data modeling, F# also has union types and record types both with automatic structural comparison and hashing. These types are more specific than "plain old objects" and can be used to create more precise models of your problem.
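
For example (an illustrative sketch, not the app's actual types):

type Merchant = Amazon | Banggood

type Listing = { Merchant : Merchant; Price : float }

let a = { Merchant = Amazon; Price = 105.0 }
let b = { Merchant = Amazon; Price = 105.0 }

// Structural equality, comparison, and hashing come for free:
printfn "%b" (a = b)                        // true - compared field by field
printfn "%b" (obj.ReferenceEquals (a, b))   // false - two distinct objects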

From a programming standpoint, F# is great due to its simple and powerful syntax. Functions are quick to define and easy to combine into chains. The syntax is driven by whitespace, so it's easier to refactor and move code blocks around in than, say, our curly-brace endowed languages.

And let's not forget F#'s other advantage: F# Interactive. It's a REPL that allows you to execute code while you're writing it, and its nice IDE integration makes writing apps an amazingly satisfying experience.

If you would like to read more from me about using F# to create GUIs you can look at my slides from .NET FRINGE 2015.

But why am I talking about F#, isn't Drone Builder a web app?

FunScript is Amazing

There is an insane library out there called FunScript that can output JavaScript code from your F# code.

Why do I say "insane library" and not "cool transpiler"? That's because of its implementation. It turns out that F# has some amazingly powerful reflection capabilities that include the ability to retrieve the abstract syntax tree (AST) of your entire app.

Constructing the AST is the first step to building a compiler or transpiler. Normally you write a parser, and then a type system, and then a module system... You then write tricky code to add types to expressions and create data structures to form the AST. It's a lot of work. But it's exactly the work the F# compiler already performs whenever you compile your app. The genius of F# is that it makes the results of that effort (the typed AST) available to you at runtime.

All you have to do is mark the modules of your app with the ReflectedDefinition attribute. With that, the F# compiler will retain the AST and make it available to your app (and libraries like FunScript).
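
The mechanics are a one-attribute affair. A minimal illustration (the module is hypothetical, and the printed AST is abbreviated):

open Microsoft.FSharp.Quotations

[<ReflectedDefinition>]
module Model =
    let double x = x * 2

// The compiler retained the typed AST; fish it back out via reflection.
let doubleMethod =
    System.Reflection.Assembly.GetExecutingAssembly().GetType("Model").GetMethod("double")

Expr.TryGetReflectedDefinition doubleMethod
|> printfn "%A"   // Some (Lambda (x, ...)) - the full typed AST of 'double'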

FunScript, armed with the full F# AST, is then able to generate JavaScript. This process, in general, is difficult and error prone (translating between two virtual machines) and FunScript handles it with aplomb.

It has an amazingly simple way to replace F# expressions with JavaScript versions using just an attribute. This little trick enabled the FunScript authors to port large swaths of the F# standard library (Core) to JavaScript and also makes it easy for your app to interact with other JavaScript libraries and the DOM not covered out of the box.
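
From memory, one of those replacement shims looks roughly like this (the JSEmit template syntax may differ across FunScript versions):

open FunScript

// The attribute's JavaScript body replaces the F# one during translation;
// {0} stands for the compiled first argument.
[<JSEmit("return window.confirm({0});")>]
let confirm (message : string) : bool =
    failwith "never runs on .NET - only the JavaScript translation executes"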

One other great bonus for using F# with FunScript is IntelliSense. Dynamically typed languages like JavaScript are hard to provide completion info for. But for statically typed languages, like F#, code completion is nigh trivial. That is to say, I get full IntelliSense as I'm coding my web app.

Part of that wonderful editing experience is thanks to the TypeScript team and their effort towards wrangling JavaScript libraries to publish "declaration" files. These files add type information to otherwise untyped JavaScript libraries. FunScript is able to use those TypeScript declaration files to provide IntelliSense for working with external JavaScript libraries and the DOM itself. It's fantastic.

Putting it all Together

So how do you build one of these React + F# apps? Let me walk you through Drone Builder's architecture.

The Data Model

Let's start with the data model. The usual product-based suspects are declared:

type Component =
    | Frame of FrameOptions
    | Motor of MotorOptions
    | Esc of EscOptions
    | Propeller of PropellerOptions
    | FlightController of FlightControllerOptions
    | PowerDistribution
    | Battery of BatteryOptions
    | RadioReceiver of RadioReceiverOptions
    | RadioTransmitter of RadioTransmitterOptions

type Product = {
    Name : string
    Key : ProductKey
    Url : string
    ImageUrl : string
    DeliveryTime : int
    Price : float
    Currency : string
    Components : (int * Component)[]
}

type DesignComponent = {
    Key : string
    ComponentInfo : ComponentInfo
    Component : Component
    Product : Product
}

type Design = {
    Key : string
    Components : DesignComponent[]
    Purchases : (int * Product)[]
}

That's it. These types - 3 records and 1 union - comprise most of the data model. Products represent something that you can purchase online and contain a set of Components (and quantities). There is not a 1-1 mapping between products and components because online merchants love to bundle things together.

A Design and DesignComponent is one specific way to build a drone. They are calculated from an analyze function. More on that later...

There are also the Options types - these are just additional bags of data attached to each component class. Here's the MotorOption to give you a flavor:

type MotorOptions = {
    Weight : float
    VelocityConstant : float
    Diameter : float
    MaxCells : int
    Model : MotorModel
}

(Note that I'm able to make use of F#'s units of measure.)
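
If you haven't met units of measure: they are phantom types on numbers, checked at compile time and erased at runtime. A two-minute illustration (these measure declarations are mine, not the app's):

[<Measure>] type g   // grams
[<Measure>] type A   // amps

let weight : float<g> = 25.0<g>
let current : float<A> = 12.0<A>

let heavier = weight + 10.0<g>      // fine: 35.0<g>
// let nonsense = weight + current  // compile-time error: g and A don't mix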

Products are assembled together into a big global variable called products:

let products : Product[] =
    [|
        { Name = "EMAX MT2204 KV2300 + ARRIS 12A 2-3S ESC"
          Key = "A-B00Y0J5WLY"
          Url = "http://www.amazon.com/dp/B00Y0J5WLY/?tag=mecpar-20"
          ImageUrl = "http://ecx.images-amazon.com/images/I/51bguFHIyFL.jpg"
          DeliveryTime = 6*7
          Price = 105.00
          Currency = "USD"
          Components =
            [|
                4, Motor { Diameter = 27.9; Weight = 25.0; VelocityConstant = 2300.0; MaxCells = 3; Model = MotorModels.M2204_2300 }
                4, Esc { Weight = 12.0; ContinuousCurrent = 12.0<A>; BurstCurrent = 20.0<A> }
            |] }
        //...
    |]

First I played with loading the catalog from a JSON file - but eventually didn't see the point in writing all the serialization/deserialization functions. F# has a very clean data declaration syntax, why not use it?

The downside is that the catalog gets merged into the code - but it sorta doesn't matter because web browsers will need to download the code + catalog anyway.

Application Logic

The application's logic is simple enough to state:

Users select products for components. Multiple products can be selected in one component. Whole components can be skipped if the user doesn't care to choose.

Designs are produced by finding all the valid combinations of product selections.

Stats are generated for each design to help the user choose between them.

From a code standpoint, this boils down to needing to keep a set of selected products (per category), then writing the design combinator, then deriving stats.

A sketch of it looks something like:

type SelectedProduct = {
    ComponentKey : CompKey
    ProductKey : ProductKey
}

let getDesigns (selProducts : Set<SelectedProduct>) : Design[] = ...

This function was not easy to write (80 loc, factored into 8 functions) and I won't bore you with its implementation. I will say that it uses F# collections and F# pattern matching to great effect and I would be hesitant to write that algorithm in another language. It has to take care of generating combinations of designs and distributing bundled components - it sounded so easy when I first started it. :-)
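
To give a flavor of the combinatorial core, though: at its heart is a Cartesian product over the per-component choices, something like this toy version (the real function also distributes bundled components and prunes invalid combinations):

// All ways to pick one choice per component.
let rec combinations (choices : 'a list list) : 'a list list =
    match choices with
    | [] -> [ [] ]
    | first :: rest ->
        [ for pick in first do
            for tail in combinations rest do
                yield pick :: tail ]

combinations [ [ "motorA"; "motorB" ]; [ "esc1" ] ]
|> printfn "%A"   // [["motorA"; "esc1"]; ["motorB"; "esc1"]]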

It also calculates stats about the drone using a combination of physics calculations and data measured from motors.

Unfortunately motor profiles are terribly measured. There are about 4 variables you need to calculate thrust from a motor and online motor profiles often only provide you with 4 data points. In order to make any inferences from this terrible data, I had to write fancy math functions that calculate Jacobians on the fly to do linear extrapolation. Again, I'm thankful I had F# to help me through writing that code. Here's a little snippet:

let getMaxThrust (v : float) (c : float<A>) (d : float) (p : float) (m : MotorModel) : float * float =
    let nearestPoints : MotorModelPoint[] = ...
    let p0 = nearestPoints.[0]
    let diff (fy : MotorModelPoint -> float) (fx : MotorModelPoint -> float) : float = ...
    let dtdv = diff (fun x -> float x.Thrust) (fun x -> float x.Volts)
    let dtdc = diff (fun x -> float x.Thrust) (fun x -> float x.Current_)
    let dtdd = diff (fun x -> float x.Thrust) (fun x -> float x.Diameter)
    let dtdp = diff (fun x -> float x.Thrust) (fun x -> float x.Pitch)
    let t =
        float p0.Thrust
        + dtdv * float (v - p0.Volts)
        + dtdc * float (c - p0.Current_)
        + dtdd * float (d - p0.Diameter)
        + dtdp * float (p - p0.Pitch)
    ...
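
That final expression for t is nothing fancier than a first-order Taylor expansion around the nearest measured point p0: thrust at p0, plus each estimated partial derivative times the distance from p0 along that axis.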

Who says you don't get to use calculus in your day to day work?!

So that's about it for application logic. Time for a UI!

The Reactive UI

The UI is built up using a mix of HTML and custom React classes. Each React class is backed by an F# View Model object.

The view models are declared as an F# tree rooted at the "app" view model. This tree gets transformed into the React component tree by the JSX declarations.

Let's look at one of the nodes on that tree. Here is the view model for the component selectors on the right side of the app:

type ComponentView = {
    Key : CompKey
    Info : ComponentInfo
    Options : OptionView[]
    Products : ProductView[]
}

This view model is then paired up with a React JSX class (this is JavaScript):

var ComponentSelector = React.createClass({
    shouldComponentUpdate: function(nextProps, nextState) {
        return !componentEq (this.props.component) (nextProps.component);
    },
    render: function() {
        var comp = this.props.component;
        var info = comp.Info;
        var products = comp.Products;
        var options = comp.Options;
        return (
            <section>
                <h1>{info.Title}</h1>
            </section>
        );
    }
});

That JSX declaration does a lot of things:

It tests if it even needs to be updated by comparing its old binding to the new one. Doing these checks drastically improves React's performance. In fact, it's the whole reason we're using immutable data structures to begin with (and, therefore, the whole reason I'm writing this article). The comparison is done by the componentEq global function; more on that later.

The render function declares the outputted HTML.

It also continues the mapping process by combining React classes with F# view models.

It's pretty simple, huh? Your UI layer becomes very straightforward to write. It's basically all about unpacking variables, choosing some HTML, and then messing with CSS to get everything to look nice.

Handling Events

The most important interaction in the app is the user toggling whether a product is selected. This is handled in the Product React class:

var Product = React.createClass({
    handleClick: function(event) {
        setProductSel (this.props.productView.ComponentKey)
                      (this.props.productView.ProductKey)
                      (!this.props.productView.Selected);
    },
    render: function() {
        var pv = this.props.productView;
        var priceEach = pv.PriceEach;
        ...
        return (
            <div>
                <div><img src={prod.ImageUrl} alt="image" /></div>
                <div>
                    {price} <span>{prod.Name}</span>
                </div>
            </div>
        );
    }
});

When a product is clicked, the global function setProductSel is called. Let's take a look at it:

let setProductSel ck pk s =
    let k = ck, pk
    if s = TheApp.SelectedProducts.Contains k then ()
    else
        let a = TheApp
        let newApp =
            if s then { a with SelectedProducts = a.SelectedProducts.Add k }
            else { a with SelectedProducts = a.SelectedProducts.Remove k }
        updateAppState newApp

where TheApp is a global variable of type:

type AppState = {
    SelectedOptions : Set
    SelectedProducts : Set
}

setProductSel is given a component key, a product key, and whether it is selected. It then recreates the global app state with that new information. It passes that app state on to the updateAppState function:

let updateAppState newState =
    TheApp <- newState
    TheAnalysis <- analyze newState
    for l in TheAppListeners do l ()

This is, basically, the only mutation in the app. It replaces the global app state with the new one (I could just as easily have retained it to create an undo buffer or something.)

It then calculates a new "analysis" which is just the rooted view model tree.

Lastly, it fires off an event to let the UI know that the state has changed.

The Application Root

I've described how the app runs, but how does it get started? This is the final bit of glue that merges the React class world with my F# view model world:

var DroneApplication = React.createClass({
    getInitialState: function () {
        var t = this;
        registerAppListener(function () {
            var a = getTheAnalysis();
            window.location.hash = a.LocationHash;
            t.setState ({ app: getTheApp(), analysis: a });
        });
        return { app: getTheApp(), analysis: getTheAnalysis() };
    },
    render: function () {
        var analysis = this.state.analysis;
        var comps = analysis.Components;
        return (
            <div>
                <header><summary>...</summary></header>
                {comps.map(function(c) {
                    return <ComponentSelector component={c} />;
                })}
            </div>
        );
    }
});

React.render (
    <DroneApplication />,
    document.getElementById('content'));

You can see that the first thing the root class does is to register for those state-updating events. It then returns the global app state (and analysis) as its own state. When an updated event is fired, it fetches its new state and invalidates itself.

And that's it! The rest is just writing more view models and more HTML and CSS.

Generating JavaScript

I have completely ignored the actual process for getting all this code into a packaged form. I'll try to outline that process now.

F# Console App

Start by putting all the F# code into an F# console app. This is convenient because we can "run our app" from the command line to test our logic or do other wacky things.

It's also necessary to have an app and not a library because someone has to call FunScript to generate JavaScript.

[<EntryPoint>]
let main argv =
    let js = FunScript.Compiler.compileWithoutReturn <@ appMain() @>
    let d = "../Site/build"
    System.IO.File.WriteAllText (System.IO.Path.Combine (d, "client.js"), js)
    0

The main entry point for our console app calls the FunScript compiler by passing it a reference to a function called appMain. All code referenced by appMain will end up getting compiled (FunScript has a nice dependency walker). The console app ends by dumping out the generated JavaScript.

My appMain function acts like a standard JavaScript module and exports a set of functions. Since I'm doing this in the browser, "export" means that I assign it to the window object (it's fine).

[<JSEmit("window[{0}] = {1};")>]
let external (n : string) (f : 'a) = ()

let appMain () =
    external "registerAppListener" registerAppListener
    external "loadPreviousAppState" loadPreviousAppState
    external "setOptionSel" setOptionSel
    external "setProductSel" setProductSel
    external "getTheApp" (fun () -> TheApp)
    external "getTheAnalysis" (fun () -> TheAnalysis)
    external "analysisEq" (fun (x : AppView) (y : AppView) -> x.Eq y)
    external "componentEq" (fun (x : ComponentView) (y : ComponentView) -> x.Eq y)
    external "designEq" (fun (x : DesignView) (y : DesignView) -> x.Eq y)

This appMain is perfect because it's easy for me to make F# functions available to JavaScript and it also satisfies FunScript's dependency checker. As a final bonus, it's compatible with Google's Closure Compiler.

It's a Bit... Big

We're doing great, all the F# code has been turned into JavaScript thanks to the magic of FunScript. But, the code it generates isn't optimal. One might even say it's unoptimized. It repeats whole expression branches when it doesn't need to, it loves generating empty expressions, and it does not share generic implementations.

The 2,500 lines of F# code (1,200 logic + 1,300 catalog) get translated to 934 KB of JavaScript... A bit much.

Google to the rescue. Google has a fantastic JS compiler called Closure that does all the gross data flow analysis needed to clean out fat code.

I just crank it up to its max settings, pass it the generated code, and out pops a 168 KB minified file. Magic.

My Makefile

Yes, I still use Makefiles. Here's what building the app looks like:

OPTIMIZATIONS = ADVANCED_OPTIMIZATIONS

all: public/index.html public/site.js

public/site.js: build/client.js build/components.js Makefile
	java -jar build/compiler.jar --externs react-externs.js --compilation_level $(OPTIMIZATIONS) --js build/client.js --js build/components.js --js_output_file public/site.js

build/components.js: src/components.js
	jsx src build

build/client.js: ../Scraper/Client.fs ../Scraper/Model.fs ../Scraper/Catalog.fs
	xbuild ../DroneBuilder.sln
	mono ../Scraper/bin/Debug/Scraper.exe

This file describes the 3 phases of the build:

Using mono & FunScript to generate JavaScript

Using JSX to generate more JavaScript

Using java & the Closure Compiler to smoosh that JavaScript

(The F# console app is called "Scraper" - for reasons.)

Concluding thoughts

And would you believe it, it all works!

I am quite proud of the app. At first it was supposed to be a quick toy to help me with my hobby, but it quickly became fertile ground to try out some new ways to build apps.

I am completely sold on this way to architect apps:

All data stored in immutable structures

Mutation localized to very specific points

Generating light-weight view hierarchies from that data

While I sometimes miss VB and mutating all the things - I don't miss the bugs.

Apps written in the functional style are easier to write, easier to understand later, and easier to extend to new scenarios.

There are tradeoffs of course. Functional languages and libraries are great at handling trees but they suck at graphs - and I find most apps to be graphs.

And then there's FunScript

I love FunScript, it's one of the best transpilers I've ever used, but I don't think I'll ever use it again.

The problem is that it just doesn't do any optimization and ends up generating code that JavaScript engines just can't handle. For instance, Drone Builder is very slow when you first start clicking around using an iPhone (it's fine on desktop browsers). It takes a long time for the browser to JIT all the methods it needs to make the app run fast.

On top of that, the error reporting in FunScript is horrendous. I love getting errors like "never" and "interface not found" with absolutely no indication which line of code triggered this bug.

I gave up on this project once because I couldn't understand one of these errors and didn't know what to change. (Finally I got lucky and changed the right thing.) Then it happened again towards the end of the project when it refused to compile equality comparisons.

Now equality comparisons are one of the main reasons I'm using functional data types. It was a real blow, but I pushed on and wrote my own equality comparisons (I had gone too far to give up).

These types of problems are to be expected with a project like FunScript - bugs happen. The real crux, though, is that the maintainer of the project hasn't worked on it in a while and is not interested in continuing work. So I'm not seeing a bright future for these bugs getting fixed.

The good news though is that this app has validated this style of programming. I just have to work on what tools I use to achieve it.

praeclarum

Sep 14, 2015

Drone Builder

praeclarum

Sep 4, 2015

My Complaints with Nuget 3

Argument: Nuget 3 makes my life harder, all in the name of solving a problem I don't have.

Lemma: And it doesn't solve any of the problems I currently have.

Conclusion: @#$E%^&&^%! meh...

My complaint against nuget 3 comes from its added burden and complexity hefted onto library developers.

Prelude

Let me start by putting my cards on the table: if it's hard for me to support your platform in my library, I'm not going to bother.

In my mind then, every effort towards improving nuget has to improve it from a library developer's perspective. If you make it easy for developers, nuget will be filled with awesome libraries that can run on the ridiculously large number of runtimes. The ecosystem and community grow and we all get back to our jobs of making fun of C++ and JavaScript programmers.

If, on the other hand, you make it hard, as has been done with nuget 3, you get a whopping "meh" from people like me and a o_O from the community.

Library developers start on a platform. I start on Mac or iOS. I have only ever started two libraries where I set out to make them cross-platform. The rest I made cross-platform either because it was trivial (start with a PCL, more on that later) or because I was willing to make the Herculean commitment to make it cross-platform.

I say commitment because anyone can create a library once - a nuget (even a nuget 3) package is a tolerable time investment. What's not tolerable is creating build scripts and build servers that can compile and package everything every time I make a minor code change. Then getting those build bots configured in a way that the community can use them? Forget about it. (I don't mention any of the commercial build services because it's hard to justify monetary investment in OSS projects. I don't mention any of the free build services because they don't support my kind of builds which usually involve Xamarin.)

Back to platforms. Now, I've started a new library on a platform.

In the bad old days before PCLs, to release the library, I would have to make a bunch of junk projects for each and every fragment of .NET, all to convince msbuild to make me a bunch of binaries. This is just a silly assortment of meaningless names - Windows RT, Windows Phone, Windows PCL, Windows UWP, blah, blah, blah. 1 Library turns into N Projects.

(Personally, I see this as a major design flaw of msbuild. Imagine how different the .NET ecosystem would be if msbuild was actually a Common Language tool that could handle sources from multiple languages; imagine if it could output binaries not tied to a single platform, but "fat binaries" that just worked. Imagine if it was a build bot and not some CLI app from 1970. This is a tool I've written for myself a couple of times when I was in my deepest throes of nuget and .NET cross-platform depression. I never released any version of them because they play hell with the IDEs, but man, it bothers me that no one else sees the project system as one of .NET's major flaws (more on that later).)

Thankfully, PCLs came and saved the day. 1 Library remains 1 Project. I could ignore .NET fragmentation if I just picked one of the supersets. This means that the majority of my libraries and code could now be shared without creating a hundred meaningless project titles and build scripts. I even write my apps using PCLs even when I don't care about cross-platform. I do it because I might want to take that code and open source it. This is how I've always worked - I see a chunk of my app that I think others could benefit from, then I open source that bit.

With PCLs, open sourcing a library became trivial. I write a terrible XML file, I don't have to create any new projects, and I just put nuget in my Makefile. Done. (And sorry that I'm conflating "Open Source" with "nuget", but most .NET devs won't even blink at a lib unless it's on nuget.)

Of course, the necessarily platform-specific bits would have to be shaken out into their own projects. It's not a perfect system, but it's manageable. 1 Library turns into M Projects where M is the number of platforms I actually care about (it's not the multitude of .NET fragments). This isn't like a PCL where I want to run everywhere - this is a platform specific lib and I take on all the effort and commitment that it implies. (I wish this effort was smaller, but the IDEs don't seem to care about library authors.)

Enough Prelude, What's Wrong with Nuget 3?

Nuget 3 was an opportunity to fix the few things wrong with nuget and make the world a better place. Nuget 2 has a couple design mistakes that I would love to see corrected in a new version:

It has no concept of "families" of libraries so platform specific libs - or libs that have been partitioned on one axis or another - each act like standalone libraries. Look at the hilarity of the FunScript libraries. Look at the FSharp Data providers. Or, if you have a sufficiently stiff drink nearby, look at the numerous ASP.NET libraries. I have no idea what any of them are or how they're related. Nuget has a very simple dependency graph that concerns itself only with binary dependencies, not conceptual. That's to say, it works fine for machines, but is a long way from humane. If libraries could join families - the catalog could be cleaned up and lib devs would feel safer partitioning their libs.

That partitioning I mentioned? Libraries get split up for millions of reasons. Perhaps it's due to platform. Perhaps a large feature is split out. Perhaps the lib developer loves the modern world of 1 class per library. Whatever their reasons, almost all large nugets are partitioned. Unfortunately, nuget (and its UI) leave it up to the consumer to reason what those partition axes are and how they apply to a project. If these axes were first-class (reified, whatever), we could turn the catalog into well organized and friendly place for both lib developers and consumers. Instead, it's just an FTP directory with a bunch of DLLs in it with a big sign: "You better RTFM"!

Even the simple dependency system is broken. If I add library A that depends on B, then remove A, I still have B lying around. This is just an embarrassing bug that should be fixed.

OK, maybe it's unfair to judge nuget 3 on what it's not. But with its slow update cycle - seemingly tied to Visual Studio - it's hard not to regret missed opportunities.

Nuget 3 upheaves the entire ecosystem. Old nuget: PCLs + Platform Specific bits (finally we hit a panacea). New nuget: PCLs? (maybe? I honestly have no idea if I'm supposed to write PCLs anymore) + Platform stuff + CoreCLR. Wait what? CoreCLR? You mean that thing that still can't run Hello World yet? My nugets got torn to shreds to support that thing? I know it's the future, and it's an exciting future, but OMG we are a long way from there. You have introduced a new platform (that doesn't work) and said that nuget is now based off of it.

Seriously, are PCLs deprecated now? A running theme in my criticism is a lack of communication about how to write libraries in this new world. I know enough to know that nuget 3 has a complicated facility to resolve between PCLs and "dotnet" - so I guess PCLs still work? But am I supposed to stop making them? Should my cross-plat libraries be dotnet based or PCL? No one will stand up and answer that question without their own several-paragraph prelude. If "dotnet" is the future, it's one shrouded in mist.

I am so confused by DNX, DNVM, and that thing called project.json. I have no idea if these things are related to nuget 3, but they have the same scent. Let me repeat, I have no idea if this nuget 3 stuff has anything to do with those techs. I am so confused by buzzwords and cute project names and blog entries that I've completely lost the narrative. Those tools are supposedly how you run code on the CoreCLR (why oh why couldn't we just have a simple executable? Oh, because web people love their environment variables? srsly?)

Or was it package.json? Confusion continues. Maybe next year we'll have purpose.json. And then the year after, promise.json. And then, no-seriously-use-this-project.yaml (haven't you all noticed yet that JSON is a terrible format for hand editing? XML is easier. YAML is easier. JavaScript is easier. TSON or any of the other *SONs are easier.).

Let's say I choose to embrace "dotnet". Well, I can't because Xamarin doesn't support it. This is a letter to Microsoft, so perhaps you don't care. But it's my main form of .NET consumption. If Xamarin doesn't support it, it might as well not exist. I can guarantee you I will actively ignore nuget 3 until Xamarin supports it.

Still hypothetically embracing "dotnet"... what is up with the manual dependencies? Breaking up the BCL is some sick joke. I was in denial for a long time, then I got angry, now it just makes me sad. There is no more stdlib. We get, what, int and string? And now I have to import libraries for everything else? This partitioning may have some technical benefits, but I don't see them. It's just added effort for what? I guess I can now run newer versions of System.Collections and old versions of System.Text? In what world does someone need to do that? A reminder: users of .NET are on a platform - we may like to consume cross-plat libraries, but we use a specific platform. I use mono. It updates its libraries every year or so. It's an exciting time of year - retesting apps and making changes and filing bug reports. The thought of libraries now following their own independent release schedules just makes me shudder.

Whatever, I'm on the losing side of history for wanting a monolithic class library. So let's say I fall down into a well and my only way out is to solemnly commit to embracing "dotnet". I am still confused about its relationship to PCLs. Every time I hear someone discuss the resolution rules for nuget 3, I dream of my peaceful days back in that well. Say I install Visual Studio 2015 Community edition (thanks so much for that btw!) and create an additional project in parallel to my PCL project. Now I'm managing two project files instead of one. One is classy and takes care of itself. The other has brain damage and I need to hand-hold it and its 100 dependencies. Or am I supposed to throw out the PCL?

Let's say my time out of the well has reformed me and the CoreCLR is actually a viable target. Well, nuget 3's file format is still a terrible bastardization of something that used to be simple. We keep shoving more and more rules and features into this schema until the file is a mixture of configuration and convention. I keep mentioning the resolution rules for nuget 3. Where are they written down? Which binaries does XS or VS pick given the set of available platforms? There are blog posts that make rough English impressionist-style drawings of this algorithm - but nothing definitive.

What I really want is a matrix with "nuget platform" as one axis and "real platform" on another. Now, if I want a library that I know works on a given "real platform", then I merely have to look on the row and find which "nuget platforms" that corresponds to. Ideally, an organization with funding would maintain this matrix - Microsoft, the .NET Foundation, Xamarin, Mono, anyone. Except "the community". The .NET community is important, but since we don't get a say in nuget design decisions and since this matrix is becoming more and more complex with every nuget release, the people doing the damage should take responsibility.

I am sad that I desire such a matrix. Sad that .NET has fragmented so much that it's needed. But instead of nuget 3 coalescing that fragmentation, it just created more.

Caveat

You may be reading this document and shaking your head "he just doesn't get it".

That is 100% possible. Maybe nuget 3 actually improves my life and I'm acting like an out of touch old codger.

But I guess that's my point too. If nuget 3 really is a fix for the fragmentation problem, then why is the present so gray and cloudy? Why are OSS library devs who have been doing this stuff for years so confused? For goodness sake, even Newtonsoft is confused and they are Microsoft's darling example.

Why isn't anyone shouting "PCLs are dead, all hail the Core CLR and its 100 dependencies!"

Is nuget 3 ahead of its time, or simply the answer to the wrong question? Only time will tell I guess.

praeclarum

Jun 3, 2015

Introducing the iCircuit Gallery

TLDR; I wrote a website to share circuits made with my app iCircuit and I hope you’ll check it out.

Finally, a place to share

iCircuit users create amazing things. For the past 5 years of reading support emails, I have been privy to just a fraction of these wonders. Circuits far bigger than I ever thought iCircuit could handle - circuits that were clever and required me going back to my college texts to understand - and circuits that just made me laugh. I learned something from each of them.

It was a shame that all these wonders were hidden in my inbox. Well, no more.

Introducing, the iCircuit Gallery - a community driven web site full of circuits.

Now iCircuit users have a place to upload and share their circuits with the world. Each circuit is lovingly rendered in SVG and can contain rich textual descriptions of the circuit. Even if you’re not an iCircuit user, you can still learn a lot from the gallery.

I have seeded the site with the standard example circuits, and Windows Phone users have (believe it or not) been able to upload circuits for years - so the site has some initial work in it already.

But I am asking iCircuit users to share their designs - big or small - novel or standard - brilliant or otherwise. Share them with the world! There is great satisfaction to be had in sharing your work with others. I hope also to see educational examples pop up that take full advantage of the ability to document the circuit.

Simply click the Upload button, create an account (email optional), and pick the files off your device. Right now, that means Mac and Windows users have the easiest time with the gallery. I am working on iOS and Android updates to make uploading a snap there too.

I am very excited to see your designs!

Future Improvements

I have lots of ideas on how to improve upon this initial release but hope to get some feedback from the community before pursuing any of them. For example, I hope to add Tags to help organize things and Comments if contributors desire.

Also, I will be integrating the gallery into the app to make browsing and uploading easier. Keep your eye out for updates!

Colophon

Oh my, I wrote a website! With servers and all that. Part of the reason it took me 5 years to write this thing is that I am scared to death of running servers. My ability to manage a server only gives it a life span of a few months before some hacker is using it as a spam bot.

So what’s changed? App hosting is what’s changed. I adored Google App Engine for it remedied the whole server problem - host apps instead of servers - genius! They provided a great database and a great toolset.

But it wasn’t .NET and I always wanted to run the iCircuit engine on the server.

And then Azure came along. Azure has a million enterprisy “solutions” and one awesome service called Mobile Services. But their Cloud Service was the most confusing thing ever. It acted like an app host but also acted like a server. Which was it? So very confusing.

Well, Azure fixed that with a Web Apps service. Finally, after that little marketing spin and an assurance that I’m not managing a server, I became a customer.

Building the site was a snap with ASP.NET MVC. My only possible mistake is that I’m using Azure’s Table Storage - not sure how that decision will pan out. I foresee a future of migrating to SQL...

I am also scared to death about cloud pricing. Every page on the site has an HTTP and memory cache of 5 minutes. It’s ridiculously high. Almost as ridiculously high as my fear of cloud service pricing.

But there’s only one way to find out...

praeclarum

May 28, 2015

Coding in Your Head

I’m terrible at coding interviews - some busy bee dusts off a tricky algorithm that they studied in college and asks you to (1) originate it from a poorly stated problem and (2) live code it in front of them.

This isn’t how I work. Like most programmers who survive more than a few years in this business, when faced with a novel or difficult problem I do the majority of my design work in my head - slowly.

Realm of Endless Possibilities

The problem gets repeated endlessly: “The user wants to accomplish X, Y, and Z - I will need to talk to data sources I, J, K - I will use algorithms A, B, C - they are connected in this configuration or that - information will be on a screen that looks like...”

I try out all the permutations of data structures, objects, their relationships to one another, algorithms that I already know, and algorithms that I note to seek out. I think through the user interface - attempting to limit the number of choices the user has to make to do repetitive tasks while still trying to give them new power.

Steeped in years of OOP programming, all this design work culminates in an object schema in my head. Known classes and their relationships to other classes are built and toyed with. I refine this graph by running many algorithms across it to see how nasty my layers of abstraction and encapsulation make moving data around (remember, in the end, the most important thing to your program is the data - not how you represent it). I look at it to see how easy it will be to extend or flat out replace in the future.

This is a slow process. It’s why I have a list of 100 “potential next apps”. They’re up in my head (or at least a few top candidates) while I toss them around and poke and prod at their code.

Coding It

Once a design is deemed robust, useful, and interesting enough, it’s time to sit down and code it. At this point you are basically limited by your programming language. This is why I’m a programming language nerd and relentless critic.

I don’t care about powerful programming languages because they save me from typing. I care about them because they allow me to get closer to my mental design than less powerful languages.

Designs of the mind are necessarily abstract - unconcerned with particulars of language. My “head design language” is just objects, interfaces, methods, properties, and events. Call this OOP 1.0. (As I learn functional programming, my language is slowly turning to records, abstract data types, interfaces, and functions.)

When I sit down to write these, any boilerplate that the language forces on me becomes an annoyance. C++ and Objective-C, which require designing a memory strategy, are profoundly annoying (I can barely get my own designs right, and now the fracking computer needs help too?). C#’s lack of metaprogramming and first-class events is another annoyance. F#’s single-pass compiler that makes you order every declaration and even your source files (seriously, what decade is this?) is, you guessed it, annoying. Even trivial syntax gets annoying at this point - why do I have to write all those silly characters? { ; } oh my.

The tools we use also become obstacles. Intelligent IDEs that are intended to make coding easier become enemies with every spinning beach ball - with every hidden setting - with every error message. Imagine trying to create an intricate sand castle on the beach during a hurricane. No wonder text editors such as Sublime are such hits.

So your beautiful mental design gets compromised into some language or another. This is why we call it coding - we are encoding a design into some barbaric text format that only highly paid professionals and intelligent 13 year olds can understand. Anyway...

That’s all to say that it’s best to burn through all the bad designs in your head so that only the decent ones have to suffer this transition to code.

Some More Thoughts

It’s a slow process but it can’t be sped up. No, test driven development is not an answer. TDD causes you to hash out a design - but one that’s biased to one consumer - the tests. It neglects the most important consumer - the end user. Also, I am happy to throw out a design that I’ve been mulling over for a week. I have never once seen a TDD advocate throw away a week’s worth of Asserts - no, they just get painfully “refactored” into the next design option.

It’s not a perfect process because your initial designs are never right. Certainly it saves you from writing an endless number of throwaway prototypes before you settle on a good design - but it won’t be a perfect design. It will have to be changed once you’ve implemented the app and learned what the app really is and how people really use it.

praeclarum

May 25, 2015

Many Levels of Rejection

Submitting apps to the App Store is filled with many wonderful opportunities to be rejected. Let’s count them!

1. Compiling/Building your app is the first possible level of rejection. It’s usually your fault, but some days...

2. Signing your app is also an adventure in rejection, with the added joy of creating multitudes of profiles and app IDs that you don’t really know what to do with but are too afraid to delete.

3. Sometimes the phone itself will reject you next. Maybe Springboard is having a bad day, or maybe you really have made a mess of those profiles...

4. Hey look at me! The watch wants in on this game too! It likes to reject you for a variety of reasons but doesn’t like to tell you which. You’ll have to dig into the logs to find its secret motives.

5. Time to submit that puppy and get rejected by iTunes Connect! iTunes is actually pretty good at this whole rejection thing and does its best at helping you through the difficult times.

6. Well now that you’re uploaded, surely the app... whoops. Nope. Time for the little Prerelease Binaries to reject you. Oh you didn’t know about that esoteric requirement? You read every guide, right? Right?

7. Time to submit for review and let the humans... nope, wrong again. Another computer can reject you now before a human ever sees it. Watch your inbox because iTunes Connect has no idea what that computer is doing.

8-1,000. Finally after all that, you can be rejected by a human. This rejection process is long, filled with unspoken truths, false assumptions, and bitter quibbles. But at least it’s a human...

1,001-1,024. It was all worth it, your app is in the store and is running gr... oh, it crashes on iPad 2s when you rotate the screen during the 5th moon of the year.

So close.

praeclarum

Mar 20, 2015

Functional GUI Apps with F# at FRINGE

I’m speaking at FRINGE!

My topic is “Functional GUI Apps”, in which I will try to convince you that GUIs don’t have to be tangles of state-transforming events.

When I first started using F#, I immediately recognized it as a wonderful data modeling and transformation language. But I wasn’t quite sure how to write GUI apps with it.

Sure, F# supports writing object-oriented code, so you can build UIs just as you would in C#. But I wanted more - I wanted my UI layer to have the same elegance as my model layer.

After a few false starts, I found not only that it’s possible to do GUIs in a functional style, but that there are a lot of advantages to doing it that way. I will be presenting my findings along with the work of others who have played in this fun world.
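
As a teaser, one popular shape this style takes is an Elm-like model/update/view loop. This is just a sketch with made-up names - not necessarily what I’ll show at the talk:

    // The model is immutable data...
    type Model = { Count : int }

    type Msg =
        | Increment
        | Decrement

    // ...the update is a pure function from a message and a model to
    // a new model (no events mutating state behind your back)...
    let update (msg : Msg) (model : Model) : Model =
        match msg with
        | Increment -> { model with Count = model.Count + 1 }
        | Decrement -> { model with Count = model.Count - 1 }

    // ...and the view is computed from the model. A real app would
    // build native controls; a string stands in here.
    let view (model : Model) : string =
        sprintf "You clicked %d times" model.Count

    // The whole UI run is then just a fold over the message stream:
    let final =
        [ Increment; Increment; Decrement ]
        |> List.fold (fun m msg -> update msg m) { Count = 0 }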

Mea Culpa: Dedicated readers will note that I whined a lot about not getting to speak at FRINGE just a couple days ago. Why this change of news? One simple answer: I’m terrible at email. Yes, my talk got accepted and, through my own cleverness, I never saw the acceptance letter. Some days... some days...
