Saturday, 8 June 2019

AWS machine learning

https://aws.amazon.com/free/machine-learning/

SageMaker - train ML models
- Implementations of standard ML models, to train and run in one-click processes
Rekognition - image recognition
- Identify people and events in photos and videos
Lex - voice and text chatbots
Polly - text to speech
Comprehend - NLP
- topic modelling, sentiment analysis, entity extraction
Transcribe - speech to text


https://aws.amazon.com/getting-started/tutorials/detect-analyze-compare-faces-rekognition/?trk=gs_card
Rekognition detects and locates faces.  It can identify features (like glasses or beards), and it does sentiment analysis on facial expressions.  It also does face matching, which in particular allows identifying the same person across lots of photos.
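A minimal sketch of calling this from Python with boto3.  The client call needs AWS credentials, so it is kept inside the function; the response fields (FaceDetails, Eyeglasses, Beard, Emotions) follow the DetectFaces API.  summarise_face is just an illustrative helper, not part of the SDK.

```python
def detect_faces(image_bytes):
    """Call Rekognition's DetectFaces API (requires AWS credentials)."""
    import boto3  # AWS SDK for Python; imported lazily so the helper below works without it
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # include features (glasses, beard) and emotions
    )
    return response["FaceDetails"]

def summarise_face(face):
    """Pull out the feature and sentiment fields the notes mention."""
    emotions = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
    return {
        "glasses": face["Eyeglasses"]["Value"],
        "beard": face["Beard"]["Value"],
        "top_emotion": emotions[0]["Type"] if emotions else None,
    }
```

Face matching uses a separate call, compare_faces(SourceImage=..., TargetImage=...), which returns match candidates with similarity scores.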

https://aws.amazon.com/getting-started/tutorials/add-voice-to-wordpress-polly/?trk=gs_card
Using Polly with Wordpress - there's a plugin, but you need to configure the IAM permissions first.

https://aws.amazon.com/getting-started/tutorials/analyze-sentiment-comprehend/?trk=gs_card
Comprehend performs sentiment analysis, entity extraction and extracts key phrases.  This can be done through an API.
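A quick sketch of the sentiment call from Python with boto3.  The API call needs credentials; dominant_sentiment is an illustrative helper showing the response shape (an overall Sentiment label plus per-class SentimentScore values).

```python
def detect_sentiment(text, language="en"):
    """Call Comprehend's DetectSentiment API (requires AWS credentials)."""
    import boto3  # imported lazily so the pure helper below works without the SDK
    comprehend = boto3.client("comprehend")
    return comprehend.detect_sentiment(Text=text, LanguageCode=language)

def dominant_sentiment(response):
    """Return the overall label and the highest-scoring class."""
    scores = response["SentimentScore"]
    return response["Sentiment"], max(scores, key=scores.get)
```

Entity extraction and key phrases are separate calls on the same client: detect_entities and detect_key_phrases.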

https://aws.amazon.com/getting-started/tutorials/analyze-extract-metadata-video-rekognition/?trk=gs_card
Rekognition Video takes a video and identifies its content.  It identifies objects and activities, detects and labels people, and tags celebrities.  It can also flag and categorise inappropriate content.  For all of these, it reports the segment of video they occurred in.
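The video APIs are asynchronous: you start a job against a video in S3, then poll for results.  A sketch, assuming a bucket/key of your own; labels_with_timestamps is an illustrative helper showing how each label carries the millisecond offset it occurred at.

```python
def start_label_detection(bucket, key):
    """Kick off an async label-detection job on a video in S3."""
    import boto3  # requires AWS credentials
    rekognition = boto3.client("rekognition")
    job = rekognition.start_label_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # Poll rekognition.get_label_detection(JobId=...) until JobStatus is SUCCEEDED.
    return job["JobId"]

def labels_with_timestamps(result):
    """Each detected label is tagged with its position in the video (ms)."""
    return [(item["Timestamp"], item["Label"]["Name"]) for item in result["Labels"]]
```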

https://aws.amazon.com/blogs/machine-learning/capturing-memories-geosnapshot-uses-amazon-rekognition-to-identify-athletes/
Competitors sign up and upload a headshot; photos and video from sporting events are then uploaded, and the system identifies each competitor and their bib number, so photographers don't have to sort photos manually to contact competitors.

Monday, 27 May 2019

React and reactive programming

https://dzone.com/articles/5-things-to-know-about-reactive-programming

Based on data streams, code is asynchronous, non-blocking and event-driven.
Cold streams are lazy and pull-based, hot streams are eager and push-based.
Functions should be side-effect free as far as possible, because the code is multi-threaded.
It's easy to overcomplicate, and this will make debugging impossible.
Reactive systems are defined by architectural principles: responsive, resilient, elastic and message-driven.  Reactive programming enables this, but it doesn't guarantee it.
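The cold/hot distinction above can be sketched in plain Python (no Rx library): a generator is a cold stream, and a toy subject class stands in for a hot one.  The class and names are illustrative, not from any real library.

```python
# Cold stream: a generator is lazy and pull-based.  Nothing runs until a
# consumer asks for values, and each consumer gets its own run from the start.
def cold_stream():
    for i in range(3):
        yield i

# Hot stream: eager and push-based.  Values are pushed to whoever is
# subscribed at the time; late subscribers miss earlier values.
class HotStream:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def emit(self, value):
        for callback in self.subscribers:
            callback(value)
```

With a HotStream, a value emitted before anyone subscribes is simply lost, which is exactly the hot/cold trade-off.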

https://medium.com/@kevalpatel2106/what-is-reactive-programming-da37c1611382

Reactive programming aims for responsive UIs by shifting work off the main thread.  It's built around observables (async data sources), observers (consumers of observable data) and schedulers (which assign work to various threads).  The (RxJava) Observer interface specifies onNext, onError and onCompleted callbacks.
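The observer contract can be sketched in a few lines of Python, mirroring the RxJava callback names from the note.  This is a toy, not a real Rx implementation (RxJava is Java; Observer and observable_from here are made-up names).

```python
class Observer:
    """Bundle of the three Rx-style callbacks."""
    def __init__(self, on_next, on_error=None, on_completed=None):
        self.on_next = on_next
        self.on_error = on_error or (lambda e: None)
        self.on_completed = on_completed or (lambda: None)

def observable_from(items):
    """Return a subscribe function that pushes each item to an observer,
    then signals completion (or an error, at most one of the two)."""
    def subscribe(observer):
        try:
            for item in items:
                observer.on_next(item)
        except Exception as e:
            observer.on_error(e)
            return
        observer.on_completed()
    return subscribe
```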

https://gist.github.com/staltz/868e7e9bc2a7b8c1f754

https://www.baeldung.com/spring-webflux

Spring WebFlux builds reactive REST clients and servers based on Project Reactor and its Flux (stream, similar to Observable) and Mono (zero-or-one value) objects.  WebClient is the Spring reactive client.  It starts with a connection and then lets you build reactive pipelines using a fluent interface.
It doesn't use WebSockets out of the box, but they can be integrated.
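WebFlux itself is Java, but the fluent operator-chaining style can be sketched in Python.  Pipeline here is a made-up toy class: each operator wraps the source lazily, and nothing runs until the terminal subscribe step, which is the same shape as a Flux chain.

```python
class Pipeline:
    """Toy fluent pipeline: lazy operator chaining with a terminal step."""
    def __init__(self, source):
        self.source = source  # any iterable

    def map(self, fn):
        return Pipeline(fn(x) for x in self.source)

    def filter(self, pred):
        return Pipeline(x for x in self.source if pred(x))

    def subscribe(self):
        # The terminal step pulls the whole lazy chain.
        return list(self.source)
```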

https://spring.io/guides/gs/reactive-rest-service/

You can also configure routing for your server-side handling using a configuration bean of type RouterFunction<T> rather than explicit controller classes.

https://docs.spring.io/spring-framework/docs/5.0.0.M1/spring-framework-reference/html/web-reactive.html

https://reactjs.org/docs/hello-world.html

React is based on building components, which combine business logic and presentation in reusable blocks.  JSX is a JavaScript extension which allows mixing of JS and HTML code in one format.  React constructs a virtual DOM, which manages state and is more lightweight than the actual DOM - it does intelligent diffing to update only the parts of the real DOM which actually change, which is more efficient.
A React component is a function (or class containing a render function) which takes a "props" object and returns a React element (i.e. some HTML).  Components can be referenced in JSX as tags (starting with a capital letter), with the tag's attributes injected into the props.
Components can store state - make them as classes, store any state in fields, and manage them using lifecycle callbacks (un/mount is adding and removing from the DOM).  A component can also set up its own scheduled callbacks internally.  Calling setState triggers (asynchronously) the UI to rerender, so updates are reflected on the screen.  Components can pass state data into the props of child components, and the child cannot distinguish this from other props it receives.
Methods can be registered as event listeners.  There's some voodoo magic about binding "this" which should live in the constructor, because otherwise the method doesn't actually have a "this" reference when called as a handler.
For conditional rendering, either use a factory method with an "if", or make a class which stores the elements as fields and returns the appropriate one, or just use a JSX expression.  You can also && the element with a conditional - if the condition is true, the element renders, otherwise it doesn't.  A component can declare that it should not be rendered by returning null from its render method.
React can handle lists, but it needs each list item to have a unique key so it can track the identity of entries and identify changes.  The key goes on the component, not the element inside it.
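The reason for keys can be sketched without React at all: with stable keys, a differ can tell an unchanged or moved item from a deleted-plus-inserted one.  A toy keyed diff in Python (diff_keyed is a made-up illustration, not React's actual reconciliation algorithm):

```python
def diff_keyed(old, new):
    """old/new are lists of (key, value) pairs; return the update operations.

    Items whose key and value both survive produce no operation at all --
    which is the saving React's keyed reconciliation gives you.
    """
    old_map = dict(old)
    ops = []
    for key, value in new:
        if key not in old_map:
            ops.append(("insert", key, value))
        elif old_map[key] != value:
            ops.append(("update", key, value))
    new_keys = {key for key, _ in new}
    for key, _ in old:
        if key not in new_keys:
            ops.append(("remove", key, None))
    return ops
```

Without keys, the differ would have to compare by position, so inserting one item at the top would look like every item changing.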

Sunday, 12 May 2019

Blog reading 2


1. Agile isn’t just for software, or just for work
2. Managers have a role in agile, but it’s more about strategy than micro-managing
3. You can never rule out change entirely, but there are more and less expensive times for it
4. You don’t need everyone to be a generalist, but multi-skilled people are at a premium
5. Agile teams do plan, they just do it incrementally based on empirical data rather than forecasts
6. Agile teams can architect, intentionally, making decisions incrementally rather than upfront

Estimating completion is hard, whereas done/not-done is indisputable, especially with an explicit definition of done.  Completion estimates tend to overestimate progress because trying to finish a task uncovers more work.  Done/not-done gives pessimistic measures, and encourages teams to prefer smaller tasks with less WIP.  The Agile Manifesto says working software is the measure of progress, and done/not-done reinforces that.

Saturday, 4 May 2019

Blog reading

I'm trying out a plan to be more deliberate in technical reading, especially with tech blogs.  The plan is to set aside time to read blogs, taking notes and summarising what I read.  These are really only intended for my own use, and the aim of posting them here is to make it easy to refer back to things.

https://www.mountaingoatsoftware.com/blog/an-agile-team-shouldnt-finish-everything-every-iteration

Teams should aim to finish all the sprint work 80% of the time.  Aiming to finish everything every time leads to undercommitting and safety margins, especially if failing threatens consequences.  The target might be lower if the team needs to respond to issues quickly.  The need for completion is driven by the business's need for predictability.  This *doesn't* mean finishing 80% of the work each iteration.

https://www.mountaingoatsoftware.com/blog/when-kanban-is-the-better-choice

Teams should experiment to find the best framework for them, not prescribe solutions.  Kanban requires less management buy-in and has fewer concepts.  It works well in immature agile environments with little flexibility.  It's ideal for small teams, or teams with large numbers of types of work that can't all be brought into a cross-functional team.  People over-focus on the kanban board visualisation, rather than the processes.

https://www.mountaingoatsoftware.com/blog/organizations-that-work-on-fewer-projects-at-a-time-get-more-done

Organisations typically take on large numbers of projects concurrently, and would work more efficiently if they focused on a small number at a time.  This can happen because they want to say "yes" to a project, without considering that this means deprioritising something else.

https://martinfowler.com/articles/201904-end-golden-age.html

Conference AV has gotten less presenter-friendly, with venues wanting slide decks presented from their own hardware.  MF's presentation software shows timing information and previews of upcoming slides, and allows skipping sections based on the presenter's feel for timing, among other transitions.  The available controls make a difference too: presenters typically get just a forward/back clicker.  Slides should be a "visual channel" which reinforces the "audio channel", not the main focus.

https://martinfowler.com/articles/domain-oriented-observability.html

DOO means instrumenting business logic to extract domain-level data, such as logging, usage metrics and analytics.  This is in addition to generic observability, but is necessarily bespoke.
This can be achieved without mixing instrumentation with business logic by creating "domain probes", facades over logging systems with interfaces expressed in domain terms.  These will make logging code more testable, and encapsulate the low-level logging systems from the codebase.  Their calls might want a request context object - this could include request info (request ID, user, timestamp, ...) and system info (version, hostname, ...) and possibly feature flags to enable A/B testing.  This could be passed in through a constructor or with a method call - it's important to isolate this from the business logic, so it doesn't depend on the contents.
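A minimal domain-probe sketch in Python (Fowler's examples are Java; DiscountProbe, RecordingBackend and the method names here are made-up domain terms for illustration).  The probe's interface speaks the domain's language; the backend behind it could be a logger, a metrics client or an event bus, and swapping in a recorder is what makes the instrumentation testable.

```python
class DiscountProbe:
    """Facade over the observability backend, expressed in domain terms."""
    def __init__(self, backend, context=None):
        self.backend = backend        # e.g. a logger, metrics client, or event bus
        self.context = context or {}  # request ID, user, feature flags, ...

    def discount_applied(self, order_id, amount):
        self.backend.record("discount_applied",
                            {"order": order_id, "amount": amount, **self.context})

    def discount_rejected(self, order_id, reason):
        self.backend.record("discount_rejected",
                            {"order": order_id, "reason": reason, **self.context})

class RecordingBackend:
    """In tests, the backend can be a simple recorder -- no log parsing needed."""
    def __init__(self):
        self.events = []

    def record(self, name, fields):
        self.events.append((name, fields))
```

The business code only ever calls probe.discount_applied(...); it never sees the context contents or the logging system behind the facade.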
Rather than having the Domain Probe make direct calls to the logging systems, it might be better to have the DP (or the business class) post events onto topics, which the logging systems consume.  Could implement it with AOP, but this is probably not a good fit for domain-specific measures, it's more suited to generic metrics.