12/5/2020
Dependency Injection Using Flexible Types and Type Inference
F# Advent 2020
This is a post for F# Advent 2020 facilitated by Sergey Tihon. Visit the link to see many more posts about F#.
Motivation
When I read Bartosz Sypytkowski's article on Dealing with complex dependency injection in F#, I knew I had to try out his method. I think his article shows a promising alternative to the "standard" dependency injection approaches you see in C# while using core F# features. This post is about my experience using what he calls an "environment parameter" for dependency injection. In short, I found the experience refreshing, and I am eager to see how the environment parameter handles changes in my application. First, I should explain why "standard" dependency injection is not enough for me.
.NET Dependency Injection is Boring and Repetitive
The dependency injection I see most often in C# (.NET Core / .NET 5) looks and feels mechanical - use interfaces and instantiate the dependencies at startup yourself, or register the interfaces in some dependency injection container. Then, you find out at runtime if you, or your dependency container, missed an interface or implementation. This approach looks like the default way to encapsulate and manage dependencies in .NET, and for fair reasons - it sounds simple, looks unsurprising (at least before runtime), and C# tooling makes it feel natural. It is also boring and repetitive.
Can F# make dependency injection less mechanical for the developer? Can the language figure out what dependencies you need based on how they are used?
If you already read Bartosz's article, you should not be surprised that I think the answer is "yes, probably". The rest of this post will assume you have not read the article, but you really should. If you do read the post, then there will be some questions that sound rhetorical. In this case, try not to roll your eyes too hard. This post is my way of comprehending Bartosz's method.
What Does F# Offer?
Advocates for F# like to mention the type system, partial application, and type inference. Partial application is a tempting approach, and it seems like an answer to my questions from the previous section. Broadly speaking, you write a function and type inference figures out the types of the arguments and return value based on usage elsewhere in the codebase.
Partial Application
Unfortunately, I do not think this is less mechanical in practice than the "standard" C# approach.
If you create and use a new dependency, you must add another field or constructor argument to services consuming the new dependency. If an existing dependency needs another capability, you will probably add another parameter and update all services that use this dependency. This feels like something the compiler and type inference can handle for us, but how do we make that happen?
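To make that churn concrete, here is a minimal sketch of dependencies passed as plain function parameters. This is my own illustration, not code from either article, and the names are made up:
// Sketch only: dependencies are ordinary parameters.
type UserDto = { Name: string }
// Adding one more dependency (metrics, caching, ...) means touching this signature
// and every composition site that partially applies findUser.
let findUser (logDebug: string -> unit) (getUser: string -> UserDto option) (searchTerm: string) =
    logDebug (sprintf "Searching for %s" searchTerm)
    getUser searchTerm
// Composition root: each new dependency reappears here too.
let findUserLive : string -> UserDto option =
    findUser (printfn "DEBUG: %s") (fun _ -> Some { Name = "example" })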
Flexible Types
Refer to the F# Language Reference for Flexible Types.
This type annotation allows us to specify that "a parameter, variable, or value has a type that is compatible with a specified type". My understanding is that this annotation, combined with two interfaces, is what enables F# type inference to work out the dependencies for us. Why two interfaces? One interface is for methods tailored to your application logic, and the other interface is to isolate a particular choice of infrastructure (logging, database, some API). Your "environment parameter" will expose the interfaces tailored to your application core logic.
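Before the full example, here is a tiny sketch of the annotation itself, unrelated to dependency injection (my own illustration):
// #seq<int> means "any type compatible with seq<int>": a list, an array, a set, and so on.
let sumAll (xs: #seq<int>) = Seq.sum xs
// Both calls compile without any upcasting at the call site.
let total = sumAll [ 1; 2; 3 ] + sumAll [| 4; 5 |]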
An Example
I made an internal dotnet cli tool to perform some specific tasks against my company's Stash (Bitbucket) REST API. The cli should apply certain repository permissions sourced from a settings file in a central repository. In other words, the tool supports an infrastructure as code workflow for development teams for their source code repository settings. It was a personal project with simple requirements, so I used it to try out the "environment parameter" approach.
The cli needed a few dependencies: logging, an authenticated http client, and an API to perform the necessary Stash REST API operations. Let's finally see some code, trimmed down to show just the environment parameter, so no validation or Result handling.
Logger Dependency
/// Application will log via these methods.
[<Interface>]
type IAppLogger =
abstract Debug: string -> unit
abstract Error: string -> unit
/// env object will use this interface
[<Interface>]
type ILog =
abstract Logger: IAppLogger
/// #ILog means env can be any type compatible with ILog interface.
/// This is the 'flexible type' annotation and where type inference
/// resolves a compatible interface - it figures out the dependency for us at compile time!
module Log =
let debug (env: #ILog) fmt =
Printf.kprintf env.Logger.Debug fmt
let error (env: #ILog) fmt =
Printf.kprintf env.Logger.Error fmt
// Adapt the dependency to IAppLogger.
// Here I am lazy and log to console, but you can use Microsoft ILogger, NLog, or whatever.
// if the logger needs configuration, I recommend making any config objects be parameters to `live`.
let live : IAppLogger =
{ new IAppLogger with
member _.Debug message = Console.WriteLine ("DEBUG: " + message)
member _.Error message = Console.WriteLine ("ERROR: " + message) }
Next, let's see what a findUser function looks like when it only uses ILog.
// val findUser:
// env : ILog ->
// searchTerm: string
// -> unit
let findUser env = fun searchTerm ->
Log.debug env "Searching for user with search term: \"%s\"" searchTerm
This function does not do anything useful, and the function signature is not surprising. This is just the usual type inference you would expect to see. We need to use another dependency to see an interesting difference in the signature.
Users API Dependency
Next, let's define the IStashUsers and IStashApi interfaces. If the need for the two logging interfaces was clear, then we can say the two Stash interfaces are analogous to the IAppLogger and ILog interfaces respectively. The first is what the application logic needs, and the second is what the "flexible types" annotation uses to enable the compiler to infer the correct interface and implicitly add the dependency to the environment type definition. At least, that is how I understand it. Hopefully not wrong!
// I decided to go perhaps a little too far by isolating the serializer dependency too.
// With System.Text.Json, this may not be remotely useful anymore.
[<Interface>]
type ISerializer =
abstract Deserialize<'t> : HttpContent -> Async<'t>
abstract Serialize : 't -> string
module Serializer =
open Newtonsoft.Json
open Newtonsoft.Json.Serialization
let private settings = JsonSerializerSettings()
settings.ContractResolver <- CamelCasePropertyNamesContractResolver()
let live =
{ new ISerializer with
member _.Deserialize<'t> httpContent =
async {
let! stringContent = httpContent.ReadAsStringAsync() |> Async.AwaitTask
let deserialized = JsonConvert.DeserializeObject<'t>(stringContent, settings)
return deserialized
}
member _.Serialize toSerialize =
JsonConvert.SerializeObject(toSerialize, settings)
}
[<Interface>]
type IStashUsers =
abstract GetByUserName: string -> PageResponse<Incoming.UserDto>
[<Interface>]
type IStashApi =
abstract Users: IStashUsers
module StashUsers =
let getUserByUserName (env: #IStashApi) searchTerm =
env.Users.GetByUserName searchTerm
let live (serializer: ISerializer) stashApiUrl accessToken : IStashUsers =
{ new IStashUsers with
member _.GetByUserName userName =
async {
let! response =
FsHttp.DslCE.Builder.httpAsync {
GET (sprintf "%s/rest/api/1.0/admin/users?filter=%s" stashApiUrl (Http.encode userName))
Authorization (sprintf "Bearer %s" accessToken)
}
return! serializer.Deserialize<PageResponse<Incoming.UserDto>> response.content
}
// the trimmed-down interface is synchronous, so run the async workflow here
|> Async.RunSynchronously
}
Using Two Dependencies Together
Notice how env changed to require both ILog and IStashApi once findUser uses Log.debug and StashUsers.getUserByUserName. Again, this type inference works because the Log and StashUsers modules use the #ILog and #IStashApi flexible type annotations respectively.
// val findUser:
// env : 'a (requires :> ILog and :> IStashApi )->
// searchTerm: string
// -> option<UserDto>
let findUser env = fun searchTerm ->
Log.debug env "Searching for user with search term: \"%s\"" searchTerm
// PageResponse<UserDto>
let x = StashUsers.getUserByUserName env searchTerm
// option<UserDto>
let user = x.Values |> Array.tryHead
Log.debug env "Best match for %s is %s" searchTerm user.Name
user
Does Environment Parameter Answer My Questions?
The questions were:
- Can F# make dependency injection less mechanical for the developer?
- Can the language figure out what dependencies you need based on how they are used?
I think the answer is yes, probably.
If I take away all uses of the Log module from findUser, then the env type signature is only IStashApi.
If I create a third module SomeOtherDependency following the same two-interface pattern with an #ISomeOtherDependency flexible type annotation, and use that module in findUser, then env will automatically be inferred to require the third interface. Pretty convenient!
I do not depend on some library or framework. Type inference and flexible type annotations are standard F# language features. If the environment type does not meet the needs of some function in some module, the code will not compile.
You still need to provide proper configurations, connection strings, etc at startup. The compiler does not check that, unless you are willing to add in a type provider. SQLProvider for example checks queries against a real database at compile time. Maybe there is a type provider or similar tool to do that for your configured dependency? That does not sound worth the effort and is beyond the scope of this post.
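For completeness, here is a rough sketch of what that startup wiring might look like. This composition root is my own guess, not code from the original article; it assumes the interfaces and live constructors shown earlier, with the logger's live value inside the Log module:
// Composition root sketch: one object satisfies every interface the functions need.
type AppEnv(stashApiUrl: string, accessToken: string) =
    let users = StashUsers.live Serializer.live stashApiUrl accessToken
    interface ILog with
        member _.Logger = Log.live
    interface IStashApi with
        member _.Users = users
// findUser accepts AppEnv because AppEnv is compatible with both #ILog and #IStashApi.
let env = AppEnv("https://stash.example.com", "token-from-configuration")
let bestMatch = findUser env "some-user"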
Remaining Questions
So far this post sounds like I am totally sold and have no other concerns. That is not true. I have some unanswered and untested questions.
- How to handle service lifetime and scoping, if at all?
- Can this approach be accomplished in C#?
- Perhaps by using type constraints, but I think C# would need type inference. No idea.
- Is this easier than "standard" C# Microsoft.Extensions.DependencyInjection?
- I think so, but my application is still simple compared to other codebases I work with.
Links and Contact
View the other F# Advent 2020 posts!
I would like to thank Bartosz for his post. I think it showed me a middle ground between partial application and a reader monad that I would not have found by myself.
Links:
Contact:
I do not have a comments section, so please use @garthfritz on Twitter or @garth on the F# Software Foundation Slack (slack access requires free F# Software Foundation membership) to contact me with feedback or clarification.
12/24/2019
Using FAKE in a Build Server
F# Advent 2019
This is a post for F# Advent 2019 facilitated by Sergey Tihon. Visit the link to see many more posts about F#.
Integrating with TeamCity
This article will be TeamCity specific, but there is not much configuration needed to use FAKE.
In short, configure your build agent to run your FAKE *.fsx script, and have your script pull in build agent variables (nuget feeds, docker feeds, credentials, build counter) via environment variables.
Always try to write your scripts to be build server agnostic. Even isolating a build server specific dependency behind a function is better than not isolating the dependency at all.
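One way to do that isolation is to read everything the build agent provides through a single module, so the build-server-specific names live in one place. This is a minimal sketch of the idea; the variable names are assumptions, not the ones my scripts actually use:
open Fake.Core
// All build-agent-provided values come through here; swapping build servers
// only means changing this module or the agent's environment variables.
module BuildAgent =
    let buildCounter () = Environment.environVarOrDefault "BUILD_COUNTER" "0"
    let nugetFeedUrl () = Environment.environVarOrDefault "NUGET_FEED_URL" ""
    let nugetApiKey () = Environment.environVarOrDefault "NUGET_API_KEY" ""
    let dockerRegistry () = Environment.environVarOrDefault "DOCKER_REGISTRY" ""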
To use FAKE, your build server needs at least one of the following on one or more of its build agents:
- install .NET SDK 2.1+ on your build agent for dotnet tool support, or
- install Docker on your build agent and specify a Dockerfile for your build agent dependencies.
Add these lines to your build script to integrate with TeamCity:
open Fake.Core
open Fake.BuildServer
BuildServer.install [ TeamCity.Installer ]
Next, modify your TeamCity configuration:
- Select Runner Type = Command Line,
- Name the step something like "Run FAKE script", or whatever you like,
- Execute step = "If all previous steps finished successfully",
- Run = "Custom Script",
- Custom Script = "fake build target CIBuild",
- Format stderr output as = "error",
- Run step within Docker container = "name of the image you built from your dockerfile":
- Hopefully you have an internal docker registry to host docker images.
- Alternatively, you can choose Runner Type = "Docker" and specify the Dockerfile in your repository, but this will build the dockerfile every time.
Build Versions and Release Notes
My teammates really like this feature of FAKE. We follow the "Complex Format" per the FAKE release notes module documentation with one small difference.
RELEASE_NOTES.md:
// FAKE's complex format
## New in 1.2.1 (Released 2019/12/24)
* stuff
* and things too
// what we do instead
## 1.2.1 - 24-Dec-2019
* stuff
* and things too
The version number of the artifacts is determined from the source code. The build server only provides a number that increments on each build.
Our build numbers follow the Major.Minor.Patch.Revision format, where Major, Minor, and Patch are sourced using the Fake.ReleaseNotes module with a RELEASE_NOTES.md file. The Revision is the TeamCity build counter.
You can think of the build script as a function that takes in an argument for Revision and assumes it runs in a git repository. Note that anything could provide the Revision argument, but the build script will load it from an environment variable.
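The pack task below refers to release() and EV.version() without showing those helpers, so here is a minimal sketch of what they might look like. The helper names, the BUILD_COUNTER variable name, and the EV.buildVersion alias are assumptions on my part:
open Fake.Core
// Major.Minor.Patch comes from RELEASE_NOTES.md, Revision from the build counter.
let release () = ReleaseNotes.load "RELEASE_NOTES.md"
module EV =
    let revision () = Environment.environVarOrDefault "BUILD_COUNTER" "0"
    // e.g. "1.2.1" plus revision "42" -> "1.2.1.42"
    let version () = sprintf "%s.%s" (release ()).AssemblyVersion (revision ())
    // the docker tasks later use a separate name; here it is just an alias
    let buildVersion () = version ()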
If you want to overly simplify a build script to a function, this is close-ish:
FileSystem -> DockerFeedConnection -> NugetFeedConnection -> RevisionNumber -> unit
NuGet Packages
// testTask.IfNeeded means this task runs after testTask, but only if testTask is also scheduled to run
let nugetPackTask = BuildTask.create "Artifact" [ testTask.IfNeeded ] {
let nugetPackDefaults = fun (options : NuGet.NuGetParams) ->
// tool path is by default ./tools/ or you can change it with Tools = "/path/to/nuget.exe"
{ options with
Publish = true
PublishUrl = "https://artifacts.company.com/api/nuget/v3/"
// https://fake.build/dotnet-nuget.html#Creating-a-nuspec-template
// replace placeholders in .nuspec with `NuGetParams` record field
Version = EV.version()
Authors = authors
Summary = "A super cool dotnet core application."
Description = "A longer description about this super cool dotnet core application."
ReleaseNotes = release().Notes |> String.toLines
// FS0052 workaround (ugly: let x = ... in x); this is a shorthand to make an intermediate value
Copyright = sprintf "Your Company %i" (let now = System.DateTime.UtcNow in now.Year)
Tags = "C#;F#;FAKE;"
Files = [ // projects deploying to kubernetes should insert their own yml file,
// but these files should always be packaged
"fake.cmd", Some "content", None
"fake.sh", Some "content", None
"deploy.fsx", Some "content", None
"paket.dependencies", Some "content", None
"paket.lock", Some "content", None ]
// set paths for NuGet
OutputPath = artifactOutDir
WorkingDir = buildOutDir
BasePath = Some root }
let packApi () =
// take the nuget pack defaults and apply API specific nuget pack settings
NuGet.NuGet (nugetPackDefaults >> ApiProject.nugetPackSettings) ".nuspec"
// now pack them all (could async parallel this later)
packApi ()
}
If you noticed ApiProject.nugetPackSettings, I like to put all functions, values, paths, and names specific to a project into a project-specific module in the build script.
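For reference, such a module might look like this minimal sketch; the values and project names are placeholders, not the real settings:
open Fake.IO.FileSystemOperators
open Fake.DotNet.NuGet
// Per-project module keeping names, paths, and pack settings in one place.
module ApiProject =
    let projectFile = "src" </> "Api" </> "Api.fsproj"
    // Applied on top of nugetPackDefaults via function composition (>>).
    let nugetPackSettings (options: NuGet.NuGetParams) =
        { options with
            Project = "YourCompany.Api"
            Title = "Your Company API" }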
Docker Images
//
// Helpers
//
/// Look for the specified `tool` on the Environment's PATH and in `otherSearchFolders`.
/// - `tool` : name of the tool on a *nix system
/// - `winTool` : name of the executable on a windows system
let platformTool tool winTool otherSearchFolders =
let tool = if Environment.isLinux then tool else winTool
tool
|> ProcessUtils.tryFindFileOnPath
|> function
| Some pathTool -> pathTool
| None ->
if Seq.isEmpty otherSearchFolders then
failwithf "platformTool %s not found" tool
else
ProcessUtils.tryFindFile otherSearchFolders tool
|> function
| Some folderTool -> folderTool
| None -> failwithf "folderTool %s not found in folders %A" tool otherSearchFolders
let dockerTool =
// you should have it installed on your development machine
// we assume docker is included in the build agent path too
platformTool "docker" "docker.exe" Seq.empty
let buildDocker repositoryUrl tag =
let args = sprintf "build -t %s ." (repositoryUrl </> tag)
runTool "docker" args "."
let pushDocker repositoryUrl tag =
let args = sprintf "push %s" (repositoryUrl </> tag)
runTool "docker" args "."
let dockerUser = "yourcompany-user"
let dockerImageName = "yourcompany-api"
let dockerFullName = sprintf "%s/%s:%s" dockerUser dockerImageName (EV.buildVersion())
let dockerBuildTask = BuildTask.create "DockerBuild" [] {
buildDocker Docker.repositoryUrl dockerFullName
}
// publish the docker image
let dockerPushTask = BuildTask.create "DockerPush" [dockerBuildTask] {
pushDocker Docker.repositoryUrl dockerFullName
}
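The docker helpers above call a runTool function the snippet does not define. Here is a minimal sketch of what it might look like using Fake's process API; this is an assumption on my part, not the author's implementation:
open Fake.Core
// Run an external tool and fail the build on a non-zero exit code.
let runTool tool args workingDir =
    let result =
        CreateProcess.fromRawCommandLine tool args
        |> CreateProcess.withWorkingDirectory workingDir
        |> Proc.run
    if result.ExitCode <> 0 then
        failwithf "'%s %s' failed with exit code %i" tool args result.ExitCode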
Stringly vs Strongly Typed Build Targets
Stringly Typed
FAKE by default has you define build targets like so:
open Fake.Core
Target.initEnvironment()
// define targets
Target.create "Test" (fun _ ->
// run dotnet test, or whatever
)
Target.create "Publish" (fun _ ->
// run dotnet publish
)
Target.create "Default" (fun _ ->
// an empty task for default build behavior on a developer machine
)
Target.create "CI" (fun _ ->
// an empty task for the CI server to enter the CI specific build target ordering
)
// define ordering
"Test"
==> "Default"
"Default"
==> "Publish"
==> "CI"
// if you run `fake build`, then "Default" will be the starting target
Target.runOrDefault "Default"
Strongly Typed
vbfox created a FAKE 5 module for strongly-typed targets that allows scripts to define let-bound values that represent build tasks, and the compiler will be able to check the usage of those targets like any other normal value.
I use BlackFox.Fake, but I miss the summary-like expression listing the order of build targets. For example:
//// Fake.Core.Target
// define targets
Target.create "Clean" ()
Target.create "Test" ()
Target.create "Publish" ()
Target.create "CI" ()
// define ordering
"Clean"
==> "Test"
==> "Publish"
"Publish"
==> "CI"
//// BlackFox.Fake.BuildTask
let cleanTask = BuildTask.create "Clean" [] { (* *) }
let testTask = BuildTask.create "Test" [cleanTask.IfNeeded] { (* *) }
let publishTask = BuildTask.create "Publish" [testTask] { (* *) }
let ciTask = BuildTask.create "CI" [publishTask] { (* *) }
I do not have a clear preference or advice on what to choose over the other. I suggest trying for yourself. My day-to-day build target order is not complicated enough to show a clear difference.
Creating Octopus Releases
If you use something other than Octopus, chances are your deployment server has a REST API to create and deploy releases.
let projectName = "Some Service"
module DeploymentServer =
module private EnvironmentVariables =
let server = Environment.environVar "Octopus-Server"
let apiKey = Environment.environVar "Octopus-TeamCityAPIKey"
[<AutoOpen>]
module private Helpers =
// when Fake.Tools.Octo nuget package works with dotnet tool Octopus.DotNet.Cli, use Fake.Octo instead
let octoTool cmd args =
dotnetTool (sprintf "octo %s %s --server=%s --apikey=%s" cmd args EnvironmentVariables.server EnvironmentVariables.apiKey) "."
let private createReleaseArgs =
// Using triple quotes to allow for quote characters in the format string, also could have escaped with backslash.
// Re-use your release notes so you see them in the octopus release screen.
sprintf """--package=*:%s --project="%s" --version="%s" --releasenotesfile="%s" """ buildNumber projectName buildNumber releaseNotesFile
/// Creates a release in Octopus for this build
let createRelease _ =
// dotnet tool update will: 1. install if not installed, 2. same version installed, reinstall it, 3. update lower version to current version
// This is nice because we do not have to check if the tool is already installed and conditionally NOT run `dotnet tool install` if it is. Install fails if the tool is already installed.
// https://github.com/dotnet/cli/pull/10205#issuecomment-506847148
dotnetTool "tool update -g Octopus.DotNet.Cli" "."
octoTool "create-release" createReleaseArgs
// make sure when this task runs that any nuget packages, docker images, etc. are already published
BuildTask.create "CreateRelease" [yourNugetPublishTask; yourDockerPublishTask] {
DeploymentServer.createRelease ()
}
VS Code Dev Containers
A good way to shorten the feedback loop on your Dockerfile defining your build dependencies is to use that Dockerfile locally. VS Code's Dev Container feature makes that really easy provided you have Docker and VS Code installed.
I have two unsolved-by-me, but manageable, problems with this approach:
- the .fake/ cache sometimes gets flagged as "invalid", so I have to purge the directory and download dependencies again
- paket-files/ sometimes experiences the same behavior as .fake/
I may have done something wrong with my Dockerfile/fake/paket combination. I have not investigated much because this problem does not happen often enough to waste time.
##
## want dotnet-sdk to use dotnet-tool and run the build script with dotnet-fake
##
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-alpine
RUN apk update
# add dotnet tools to path to pick up fake and paket installation
ENV PATH="${PATH}:/root/.dotnet/tools"
# install dotnet tools for fake, paket, octopus
RUN dotnet tool install -g fake-cli \
&& dotnet tool install -g paket \
# https://octopus.com/docs/octopus-rest-api/octo.exe-command-line/install-global-tool
&& dotnet tool install -g Octopus.DotNet.Cli \
# install yarn
&& apk add yarn \
# install docker cli; note the build server will have to provide the actual docker engine
&& apk add docker \
# other tools expected by build.*.fsx scripts
&& apk add git curl
# bring in the build scripts and build script dependencies files
COPY build.standalone.fsx build.webcomponents.fsx paket.dependencies paket.lock /var/app/
COPY .paket /var/app/.paket/
WORKDIR /var/app
I publish this image to our docker registry so my teammates and the build server do not need to rebuild the image every time.
FAKE and Build Servers
Try to write build scripts to be build server agnostic.
Even though we did not change our build server, we gained the ability to treat our build process as just another segment of code to branch, peer review, and run. I think this is much easier than using pre-defined steps and templates defined in your build server of choice.
Links, Inspiration, and Contact
View the other F# Advent 2019 posts!
Links:
Inspiration:
I often reviewed these repositories to see how they used FAKE.
Contact:
I do not have a comments section, so please use @garthfritz on Twitter or @garth on the F# Software Foundation Slack to contact me with feedback or clarification.
12/13/2018
This is a post for F# Advent 2018.
This post will describe the things that got my team hooked on FAKE - an F# DSL for Build Tasks and more. This will be more narrative and opinions than F# code. Sorry.
"If a company says they are a ".NET shop", what they really mean is "we are a C# shop, .NET is C#, stop talking to me about F#".
— Me, ranting in my 1 on 1 meetings with my manager
I have been pushing F# on my coworkers since I started in June 2017. Lots of things got thrown onto the wall, and the things that actually shipped were one project with JSON and WSDL Type Providers and, yesterday, a project built completely by FAKE (also has JSON and CSV Type Providers).
Disclaimer, these are opinions and are listed in no particular order. If you have any feedback, need some clarification, or want to tell me I'm completely wrong, the best place to start will be Twitter.
Things My Team Liked About FAKE
Feel Like a Command Line Wizard Again
If using FAKE makes developers have fun scripting building, testing, and packaging processes, then that is a win all by itself. Bonus points if it makes them feel like a cool kid.
The FAKE dotnet global tool helps with that too.
Freedom to Script as Much of Build and Deploy as You Want
- the way you build locally is how the build server builds
We have the build server (TeamCity, but it could just as well be another) provide the full build number and move our build artifacts to our internal package feed. Everything else is done in the script.
A developer can try different build configurations locally without messing up the project build configuration on the build server. Most of the benefits under this reason are the same benefits as putting any other code into source control.
The biggest win is how short the feedback cycle is for building. How quickly can you debug a build error with a particular TeamCity build step? Probably not as fast as you could on your own machine. Don't you normally remote or ssh into the problem build agent if the error log doesn't make sense anyway?
FAKE Features Make Annoying Things Easy
I have my favorite FAKE features, but these are the top ones according to my newly converted team.
- Super easy templating of .nuspec parameters
We apply the same NuGet package attributes to every assembly, so it was really easy to just let FAKE do that for us. All you have to do is substitute the values you care about and the minimum fields NuGet requires.
Example customizing FAKE's default nuspec.
- Release Notes automatically pulled from the latest version in the Release Notes file
I don't think any of our projects publish developer written release notes, but FAKE makes it easy to publish them in the NuGet package Release Notes field. I think release notes from the developer are a good idea.
FAKE's ReleaseNotes Sample
"I still don't love functional or F# for my day-to-day work, but I'll be damned if FAKE and Type Providers aren't my favorite things right now."
Things My Team (and others) Did Not Like About FAKE
I will use the following pattern to list the concerns:
- the problem/concern someone has
- my not necessarily nuanced retort
Here we go:
- Syntax is jarring (aka syntax shock).
- I think you mean "is not C# syntax". Well so is HTML, CSS, SQL, JavaScript, Powershell, Bash, but you can do all of those!
- Who will train and help other people to be familiar with F# if this becomes standard?
- Can't you just do all of this stuff in TeamCity and Octopus already? That's why we bought it.
- Sounds like sunk cost fallacy to me.
- If you want finer grained control over your build, I don't think canned TeamCity steps are enough.
- I think FAKE's Target Dependency Ordering is more powerful and developer-friendly than standing up multiple TeamCity build configurations.
- Isn't writing code a big part of your job? Why do you prefer clicking and dragging boxes in a TeamCity/Octopus screen over writing code?
How Did I Do It?
I tested out FAKE near the end of its FAKE 4 lifetime. Once FAKE updated to version 5 I tried to script the build for one of our big legacy applications. I did not get very far. It was way too much process to replace at once, and I could not present F# or FAKE in a good light with a partially migrated build.
Fortunately, I found an NDC talk Immutable application deployments with F# Make - Nikolai Norman Andersen and Nikolai's sample weather-in-harstad repository which put me on the path of making a coherent argument and demo build script for the team. I encourage you to watch Nikolai's talk in full. I'll even repeat the link at the end.
Some weeks later, we started two greenfield projects - one large in scope and one small. Here's the "secret" way I got FAKE into the build - I just did it. F# first, ask questions (or forgiveness) later, except this time it worked.
Future Work
Due to priorities changing frequently, we have not had time to use FAKE to script our deploy process and post-deployment smoke testing. The team and I still really want to do that, but time constraints unfortunately make it smarter to just let Octopus do its job.
Other than time constraints, I want to do some preparation work to confidently demo a solid FAKE deploy script to the team.
1. How should I pull all of the non-sensitive variables out of Octopus and into the FAKE script?
2. Same as #1 but for the sensitive variables (API keys, Production level credentials, etc.)? Nikolai demonstrated using git-secret to accomplish this, but he was hesitant to recommend it, so I need to research it more.
3. How do I safely and unobtrusively transform all of the former Octopus variables to their environment specific values? I don't think anyone likes having pages and pages of Octopus variables. I am certain FAKE can provide an elegant alternative. I just need to work on it more.
4. How can I make #1-3 easy for the rest of the team to maintain and develop?
5. How do I reliably share any bespoke deployment tasks we make with other teams via Octopus?
If any of these problems sound really easy to you or you have already solved them using FAKE, please let me know!
You should watch Immutable application deployments with F# Make - Nikolai Norman Andersen.
2/22/2017
If you try to NULL the value of an Entity field and save that change with the CRM OrganizationServiceContext, you need to be careful how you set that NULL value. If the field to NULL is not in the myEntity.Attributes collection, then it will not be updated when the service call updates the record in CRM.
We can demonstrate this by initializing the early bound Account entity in a few different ways and inspecting the Attribute collection. The field to clear in this example will be ParentAccountId.
First, we will use the constructor then NULL with dot notation.
Second, we use the object initializer syntax and null with that.
Third, we initialize only the ID field and use dot notation to set the field NULL.
These attempts will not put ParentAccountId into the Attributes collection. Two ways that will work are setting the field to NULL with the late bound class, and initializing or setting ParentAccountId with a dummy non-NULL value in the early bound class and then setting the field to NULL.
This test class will demonstrate each of these approaches.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Xrm.Sdk;
using MyXRM.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
namespace MyXRM.Tests
{
[TestClass]
public class AttributeCollectionTest
{
[TestMethod]
public void TestWhatAddsAttributeToCollection()
{
// set up the account to reference
var myAccountInCRM = new Account();
myAccountInCRM.Id = Guid.NewGuid();
myAccountInCRM.Name = "Test Name";
myAccountInCRM.ParentAccountId = new EntityReference(Account.EntityLogicalName, Guid.NewGuid());
Assert.IsTrue(myAccountInCRM.Attributes.ContainsKey("parentaccountid"));
// list of pass/fails for each attempt
var results = new List<bool>();
/*
* Below are different ways of initializing the early bound Account entity
* with the field we want to clear.
*/
//
var withConstructor = new Account();
withConstructor.Id = myAccountInCRM.Id;
withConstructor.ParentAccountId = null;
results.Add(withConstructor.Attributes.ContainsKey("parentaccountid"));
var withInitializer = new Account
{
Id = myAccountInCRM.Id,
ParentAccountId = null
};
results.Add(withInitializer.Attributes.ContainsKey("parentaccountid"));
var withInitializedId_ThenUpdate = new Account
{
Id = myAccountInCRM.Id
};
withInitializedId_ThenUpdate.ParentAccountId = null;
results.Add(withInitializedId_ThenUpdate.Attributes.ContainsKey("parentaccountid"));
var lateBound = new Entity
{
Id = myAccountInCRM.Id
};
lateBound["parentaccountid"] = null;
results.Add(lateBound.Attributes.ContainsKey("parentaccountid"));
var withInitializer_ActuallyClearsField = new Account
{
Id = myAccountInCRM.Id,
ParentAccountId = new EntityReference()
};
withInitializer_ActuallyClearsField.ParentAccountId = null;
results.Add(withInitializer_ActuallyClearsField.Attributes.ContainsKey("parentaccountid"));
Console.WriteLine("Test Results: {0}",
String.Join(",", results));
Assert.IsTrue(results.Count(x => x == true) == 2,
"Only two of these cases should have passed.");
Assert.IsTrue(results[results.Count - 2],
"The late bound example should have been true");
Assert.IsTrue(results.Last(),
"The last result should have been true in this demo.");
}
}
}
In this case, using the late bound entity is more straightforward than using the early bound entity. With late bound you will not get intellisense, so make sure you have the correct spelling and casing for your field. You can find the correct string to use in your early bound entities file by hitting F12 on the early bound field and inspecting the method decorator. For our field, we use the string in here: [Microsoft.Xrm.Sdk.AttributeLogicalNameAttribute("parentaccountid")].
If you use the Early Bound Generator tool from the XRM Toolbox, one particularly useful thing it does is enumerate each attribute name as a struct of strings. That provides intellisense and the correctly cased string name of the field.
Initializing an early bound entity field with NULL looks like code that should work, but chances are you only notice the problem when the update does not clear that field in CRM. You could just as easily do earlyBoundAccount["parentaccountid"] = null;, but why would that be your first choice when you have early bound classes?
You might consider a wrapper class to handle this NULL setting logic for you, or probably simpler still an extension method SetToNull(myAccount, "nameOfFieldToClear") so you can use this for all entities. Remember to use the Fields struct if you use the Early Bound Generator to create your early bound classes - SetToNull(myAccount, Account.Fields.NameOfFieldToClear).
8/22/2016
Summary
Problem:
If an entity record is missing required fields, you get an error when trying to deactivate the record from the form.
Solution Summary
I assume you know how to use RibbonWorkbench to edit entity ribbons so I gloss over the setup specifics. Review the Getting Started Guide at the author's website and the CRM 2016 RibbonWorkbench beta announcement post for more information about Ribbon Workbench.
- Open a solution containing the entities you want to fix in Ribbon Workbench.
- Add a Custom Javascript Action above the existing Custom Javascript Action. Our new action must execute first.
- Have the action call a function that does the following:
- Remove the required level from all form fields then return. This must be synchronous code because the next Action will execute immediately after the first action returns. It should remember which fields were required if you want to restore them after statecode changes.
- (optional) add an OnChange event to the statecode attribute (make sure this is on the form) to restore the required level to the correct attributes.
- Publish the solution from Ribbon Workbench.
Solution/Workaround, Longer Form
In CRM 2016, and similarly for others in 2013+, we ran into an odd error around deactivating Accounts and Contacts from their forms. This likely can happen on any record having a Deactivate button. If a Contact record is missing a required field, denoted by a red asterisk (*), then clicking the Deactivate button and completing the popup window by clicking OK gives you a not-so-helpful error message:
The obscured window is the "Confirm Deactivation" CRM lightbox.
If you fill in the required fields and try again (with or without saving the form), then the Deactivate button click works. Deactivating the record from a homepage grid or subgrid works regardless of the required fields. The grid approach does not need required fields to be filled. Why does the form need it? Since the required fields were the apparent blockers, I thought the button was changing the statecode and statuscode fields, saving the form, and failing because you can't save the form when required fields are empty. We have to see how the Deactivate button works, and I used Ribbon Workbench for CRM 2016 (beta) to see the function name I need to find.
The bottom right Custom Javascript Action is what an uncustomized Deactivate Button command does when clicked. Ignore the action above it for now - it is the workaround I will describe later.
The RibbonWorkbench showed me the library and function the Deactivate button calls - CommandBarActions.js and Mscrm.CommandBarActions.changeState. If I am on the Account form, the button calls Mscrm.CommandBarActions.changeState("deactivate", "{my-account-guid}", "account"). At the end of this post is the code that I followed while trying to mentally trace what happens when Deactivate is clicked in our scenario. It is not the full CommandBarActions.js file. I did not find a definitive answer, but if you want to read the optional ramblings, follow the comments from top to bottom in that code block. It suffices to know that empty required fields are the root of the problem, and that is something we can fix.
I think this is a bug in CRM 2016 forms, but we can work around it in a supported way. I wonder why the form does not do a specialized UpdateRequest (fancy name for "just update the statecode and statuscode in the UpdateRequest") through REST or WebApi? It might be on a backlog somewhere.
Check the top right Custom Javascript Action again. Notice the Custom Javascript Action called deactivateFromFormWorkaround taking PrimaryEntityTypeName as a parameter. This will temporarily remove the required level from required fields so deactivating from the form will complete.
// Remove Required Level from Fields so Deactivate Works on CRM 2016 form, then restore after the statecode changes
// XrmCommon.removeOnChange and XrmCommon.addOnChange call the same Xrm.Page methods but check if the field exists on the form first.
// CommandProperties is always passed as the first parameter in Ribbon Button Actions
function deactivateFromFormWorkaround(CommandProperties, PrimaryEntityTypeName) {
var restoreRequiredFields = function (context) {
XrmCommon.undoRemoveRequiredLevel();
XrmCommon.removeOnChange("statecode", restoreRequiredFields);
};
var permittedEntities = ["account", "contact"];
if (permittedEntities.indexOf(PrimaryEntityTypeName) === -1) {
console.error(PrimaryEntityTypeName + " is not supported for this Deactivate button workaround.");
return;
}
XrmCommon.removeRequiredLevel();
XrmCommon.addOnChange("statecode", restoreRequiredFields);
}
// XrmCommon is normally in another js file, so I'm adding just the relevant code to this gist.
var XrmCommon = XrmCommon || {};
XrmCommon._requiredFields = [];
XrmCommon.removeRequiredLevel = function () {
/// <summary>Removes required level from all required fields</summary>
Xrm.Page.getAttribute(function (attribute, index) {
if (attribute.getRequiredLevel() == "required") {
attribute.setRequiredLevel(XrmCommon.CONSTANTS.FORM_REQUIRED_LEVEL_NONE);
XrmCommon._requiredFields.push(attribute.getName());
}
});
}
XrmCommon.undoRemoveRequiredLevel = function () {
if (XrmCommon._requiredFields.length == 0) {
_xrmCommonConsoleWarning("Nonsensical call to XrmCommon.undoRemoveRequiredLevel without calling XrmCommon.removeRequiredLevel first");
}
else {
var affectedFieldNames = XrmCommon._requiredFields;
for (var name in affectedFieldNames) {
XrmCommon.setFieldRequirementLevel(affectedFieldNames[name], XrmCommon.CONSTANTS.FORM_REQUIRED_LEVEL_REQUIRED);
}
XrmCommon._requiredFields.length = 0;
}
}
XrmCommon.CONSTANTS = {
FORM_REQUIRED_LEVEL_NONE: "none",
FORM_REQUIRED_LEVEL_RECOMMENDED: "recommended",
FORM_REQUIRED_LEVEL_REQUIRED: "required"
};
This code could have instead done a Metadata query to retrieve which fields are required for this form. The SDK javascript libraries do asynchronous calls, and you can modify the functions to add a parameter to make them synchronous calls if you want. I think the presented approach is simpler and definitely less code. You do not have to restore the required levels as it is just a cleanup step.
One problem with this approach is if the user cancels the Deactivate confirmation, then the formerly required fields will still be not required.
That's it! Hopefully updates to CRM fix this weird behavior.
CRM Javascript and Ramblings
This is the code block referenced above.
// SUMMARY if you don't want to read the whole thing
// If this branch is followed and does the return "if (!Xrm.Page.data.getIsValid()) return;",
// then I think the "please try again" popup happens because "Xrm.Page.data.save($v_5).then($v_0, $v_1)" has a problem.
// Otherwise, I think the "please try again" popup happens because getIsValid makes this command return earlier than expected
// I find the specific message defined as the global variable LOCID_IPADWINCLOSED,
// but I don't find how calling Mscrm.CommandBarActions.changeState() directly from the ribbon in this scenario throws that message.
// clicking on Account form calls: Mscrm.CommandBarActions.changeState("deactivate", "{my-account-guid}", "Account")
Mscrm.CommandBarActions.changeState = function(action, entityId, entityName) {
Mscrm.CommandBarActions.handleStateChangeAction(action, entityId, entityName)
};
Mscrm.CommandBarActions.handleStateChangeAction = function(action, entityId, entityName) {
var $v_0 = null;
if (Mscrm.CommandBarActions.isWebClient() || Xrm.Page.context.client.getClient() === "Outlook") {
$v_0 = new Xrm.DialogOptions;
$v_0.height = 230;
$v_0.width = 600
}
// entityName = "account" makes this if guard false,
if (Mscrm.InternalUtilities.DialogUtility.isMDDConverted(action, entityName)) {
var $v_1 = new Microsoft.Crm.Client.Core.Storage.Common.ObjectModel.EntityReference(entityName, new Microsoft.Crm.Client.Core.Framework.Guid(entityId)),
$v_2 = [$v_1],
$v_3 = {};
$v_3["records"] = Mscrm.InternalUtilities.DialogUtility.serializeSdkEntityReferences($v_2);
$v_3["action"] = action;
$v_3["lastButtonClicked"] = "";
$v_3["state_id"] = -1;
$v_3["status_id"] = -1;
Xrm.Dialog.openDialog("SetStateDialog", $v_0, $v_3, Mscrm.CommandBarActions.closeSetStateDialogCallback, null)
} else {
$v_0.height = 250;
$v_0.width = 420;
var $v_4 = Xrm.Internal.getEntityCode(entityName),
$v_5 = Mscrm.GridCommandActions.$L(action, $v_4, 1);
$v_5.get_query()["iObjType"] = $v_4;
$v_5.get_query()["iTotal"] = "1";
$v_5.get_query()["sIds"] = entityId;
$v_5.get_query()["confirmMode"] = "1";
var $v_6 = [action, entityId, entityName],
$v_7 = Mscrm.CommandBarActions.createCallbackFunctionFactory(Mscrm.CommandBarActions.performActionAfterChangeStateWeb, $v_6);
// $v_6 is the args array to performActionAfterChangeStateWeb, so now check what that function does
// when $v_6 = ["deactivate", "{my-account-guid}", "account"]
Xrm.Internal.openDialog($v_5.toString(), $v_0, [entityId], null, $v_7)
}
};
Mscrm.InternalUtilities.DialogUtility.isMDDConverted = function(action, entityName) {
switch (action) {
case "activate":
switch (entityName) {
case "audit":
case "campaignresponse":
case "channelaccessprofilerule":
case "contract":
case "service":
case "sla":
case "systemuser":
case "workflow":
return false
}
break;
case "deactivatecampactivity":
return false;
case "deactivate":
switch (entityName) {
case "audit":
case "campaignresponse":
case "channelaccessprofilerule":
case "contract":
case "service":
case "sla":
case "systemuser":
case "workflow":
return false
}
break;
case "delete":
switch (entityName) {
case "audit":
case "service":
case "workflow":
case "hierarchyrule":
return false
}
break;
case "converttoopportunity":
switch (entityName) {
case "serviceappointment":
return false
}
break;
case "converttocase":
switch (entityName) {
case "serviceappointment":
return false
}
break;
case "assign":
switch (entityName) {
case "connection":
case "duplicaterule":
case "emailserverprofile":
case "goal":
case "goalrollupquery":
case "importmap":
case "mailbox":
case "mailmergetemplate":
case "postfollow":
case "queue":
case "report":
case "serviceappointment":
case "sharepointdocumentlocation":
case "sharepointsite":
case "workflow":
return false
}
break
}
return true
};
Mscrm.CommandBarActions.createCallbackFunctionFactory = function(func, parameters) {
return function(retValue) {
parameters.unshift(retValue);
return func.apply(null, parameters)
}
};
Mscrm.CommandBarActions.performActionAfterChangeStateWeb = function(returnInfo, action, entityId, entityName) {
var $v_0 = -1,
$v_1 = 0;
if (!Mscrm.InternalUtilities.JSTypes.isNull(returnInfo) && returnInfo) {
var $v_2 = returnInfo;
// $1U is a parseInt wrapper, so I'm not including it
$v_0 = Mscrm.CommandBarActions.$1U($v_2["iStatusCode"]);
$v_1 = Mscrm.CommandBarActions.$1U($v_2["iStateCode"]);
// performActionAfterStateChange("deactivate", "{my-account-guid}", "account", newStateCodeFromDeactivateDialog, newStatusCodeFromDeactivateDialog, probablyReturnObject)
Mscrm.CommandBarActions.performActionAfterStateChange(action, entityId, entityName, $v_1, $v_0, $v_2)
}
};
Mscrm.CommandBarActions.performActionAfterStateChange = function(action, entityId, entityName, stateCode, statusCode, result) {
var $v_0 = 0;
switch (entityName) {
//
case "account":
case "contact":
case "pricelevel":
case "recommendationmodel":
case "systemuser":
case "topicmodel":
case "knowledgesearchmodel":
if (action === "activate") {
stateCode = 0;
Xrm.Page.context.saveMode = 6
} else if (action === "deactivate") {
stateCode = 1;
// this is our entityName and action
// but I don't know what saveMode = 5 does when required fields are empty
// doesn't seem to do anything different when run in the console... moving down
Xrm.Page.context.saveMode = 5
}
break;
case "entitlement":
if (action === "activate") stateCode = 1;
else if (action === "deactivate") stateCode = 0;
break;
case "campaignactivity":
if (action === "deactivatecampactivity") {
$v_0 = 5;
var $v_1 = new Mscrm.CampaignActivityStateHandler;
$v_1.setDates(result["iStartDate"], result["iEndDate"]);
$v_1.updateState()
}
break
}
if (action === "activate") $v_0 = 6;
else if (action === "deactivate") $v_0 = 5;
Xrm.Page.context.saveMode = $v_0;
// setState calls $14 so it's a non-trivial enough wrapper to include here
Mscrm.CommandBarActions.setState(entityId, entityName, stateCode, statusCode)
};
Mscrm.CommandBarActions.setState = function(entityId, entityName, stateCode, statusCode, closeWindow, entityToOpen, entityIdToOpen) {
if (Mscrm.InternalUtilities.JSTypes.isNull(Xrm.Page.data.entity.getId())) return;
// getIsValid is not documented, so I can't assume it checks required fields are filled, but I _think_ it does...
// but this seems like a controlled return and not something that would make the popup "an error has occurred please go to the homepage and try again"
// I can't find the source of getIsValid() so I assume it returns true if required fields are empty
if (!Xrm.Page.data.getIsValid()) return;
// I think this is CRM trying to match your chosen statusCode to a stateCode
// I assume the Confirm Deactivation lightbox picks only the StatusCode
// either way, $14 still gets called so I don't think I need to include this setState function to read through
// now look at $14
if (typeof statusCode === "undefined") statusCode = -1;
else if (stateCode === -1) {
Xrm.Internal.getStateCodeFromStatusOption(entityName, statusCode).then(function($p1_0) {
stateCode = $p1_0;
Mscrm.CommandBarActions.$14(entityId, entityName, stateCode, statusCode, closeWindow, entityToOpen, entityIdToOpen)
}, function() {
Mscrm.CommandBarActions.$14(entityId, entityName, stateCode, statusCode, closeWindow, entityToOpen, entityIdToOpen)
});
return
}
Mscrm.CommandBarActions.$14(entityId, entityName, stateCode, statusCode, closeWindow, entityToOpen, entityIdToOpen)
};
// I think the dive finally ends here
// $v_0 seems to be the actual deactivate via Xrm.Internal.messages.setState()
Mscrm.CommandBarActions.$14 = function($p0, $p1, $p2, $p3, $p4, $p5, $p6) {
var $v_0 = function($p1_0) {
if (!$p0 || !$p0.length) $p0 = Xrm.Page.data.entity.getId();
// if I'm on the web on a form, then I think this if guard is false
// so we go to the else branch!
if (Xrm.Utility.isMocaOffline()) {
var $v_2 = new Microsoft.Crm.Client.Core.Storage.Common.ObjectModel.EntityReference($p1, new Microsoft.Crm.Client.Core.Framework.Guid($p0)),
$v_3 = new Microsoft.Crm.Client.Core.Storage.DataApi.Requests.SetStateRequest($v_2, $p2, $p3, true),
$v_4 = function() {
Mscrm.CommandBarActions.$1q($p0, $p1, $p4, $p5, $p6)
};
Xrm.Utility.executeNonCudCommand("SetState", $p1, $v_3, $v_4, Mscrm.InternalUtilities.ClientApiUtility.actionFailedCallback)
// looks like Xrm.Internal.messages.setState is a promise function
// I assume setState works fine, but $1q tries to figure out what to do with the UI after the promise completes successfully
// ALTHOUGH, $v_0 does not even get called until the form saves successfully... so lets go to Xrm.Page.data.save($v_5)
} else Xrm.Internal.messages.setState($p1, $p0, $p2, $p3).then(function($p2_0) {
Mscrm.CommandBarActions.$1q($p0, $p1, $p4, $p5, $p6)
}, function($p2_0) {
Mscrm.CommandBarActions.$O = false;
Mscrm.InternalUtilities.ClientApiUtility.actionFailedCallback($p2_0)
})
},
$v_1 = function($p1_0) {
Mscrm.CommandBarActions.$O = false
};
if (!Mscrm.CommandBarActions.$O) {
Mscrm.CommandBarActions.$O = true;
var $v_5 = new Xrm.SaveOptions;
$v_5.useSchedulingEngine = false;
// I don't see why but maybe the save throws an error? Otherwise, it might actually be .getIsValid returning early
// that makes the message throw.
Xrm.Page.data.save($v_5).then($v_0, $v_1)
}
};
8/19/2016
Requirement
Replace freetext Assistant fields on the Contact form with a referential 1:N Contact relationship lookup field. Freetext fields for Assistant are not convenient enough.
Summary of Steps
- Create a new Contact lookup field, referential relationship type.
- Notice that the default set of field mappings will default the Assistant field to the Contact you are creating the Assistant from.
- Register and fire an OnChange event in the Quick Create form OnLoad event to check the Assistant GUID against the CreateFromId from QueryStringParameters.
- If the GUIDs match, then you know this form is being used to create an Assistant.
- Add JavaScript to the Assistant OnChange that runs when the GUIDs match to do other Assistant-only defaulting that the Relationship Mappings cannot do (or if non-mapped fields should not be changed by the user, then register a Pre-Create of Contact event plugin to set them instead of this JavaScript).
Walkthrough
When creating a new lookup field, you have to create a new relationship. CRM will generate a set of default mappings for new relationships. We wanted to use a lookup to another Contact record instead of using the freetext fields "Assistant", "Assistant Email", and "Assistant Phone" on a Contact.
In our example, we have created a 1:N relationship from Contact to Contact for a field called Assistant. Any fields in the mappings list will populate onto the Create form for you. If you want to use this defaulting in plugin code, you have to use the InitializeFromRequest or WebApi InitializeFrom function. One of the automatic field mappings defaults the Assistant field of the new record to the Contact from which it is being created. This is a bad default because it is a circular reference, and CRM complains about it when you try to save the form. If Assistant was not on the create form, this probably would not happen. Our requirement specifically wants the field on the form though.
We enabled the Contact entity for Quick Create forms, and the Assistant field is on the Quick Create form. CRM does not let you delete or modify that relationship mapping so we have to do some client-side validation. Add an OnChange event to the Assistant field and fire it in the OnLoad event of the Quick Create form. The relationship populates the fields before OnLoad fires. The OnChange event should clear the Assistant field if the GUID of the Assistant lookup matches the GUID of the source Contact record. In a Quick Create form launched from a lookup field, you can get the source GUID from Xrm.Page.context.getQueryStringParameters()._CreateFromId.
If I am creating an Assistant for my Contact named "test, testington", then this is what the QueryStringParameters return object looks like. Hit F12 when the Quick Create form is open then type that code - frames[0] or frames[1] might be necessary if the Xrm.Page object is sort of empty.
// JavaScript for setting defaults on a Quick Create form launched from a new Assistant lookup field
function quickCreateStartLoad() {
if (XrmCommon.getFormType() === XrmCommon.CONSTANTS.FORM_TYPE_CREATE) {
XrmCommon.addOnChange("tmr_assistant", assistantOnChange);
XrmCommon.fireOnChange("tmr_assistant");
}
else{
throw new Error("quickCreateStartLoad should only be used on Create forms. Fix that customization!");
}
}
function assistantOnChange() {
//
// The relationship mapping defaults tmr_Assistant to the contact this quick create form was launched from.
// Stop that because it creates a circular reference!
// The mapping cannot be deleted or modified in the solution so this is a workaround.
var lookupValue = XrmCommon.getFieldValue("tmr_assistant");
if (lookupValue && lookupValue.length > 0 && lookupValue[0]) {
var selection = lookupValue[0];
var contextParams = Xrm.Page.context.getQueryStringParameters();
// if the guids are the same
if (XrmCommon.compareGuids(selection.id, contextParams._CreateFromId)) {
XrmCommon.setFieldValue("tmr_assistant", null);
// Now we know we are on a Quick Create form opened by the Assistant lookup
// so we can set values that make sense for a new Assistant record
XrmCommon.setFieldValue("customertypecode", XrmCommon.CONSTANTS.CONTACT_CUSTOMERTYPECODE_ASSISTANT);
}
}
}
The XrmCommon stuff is just my wrapper around the Xrm.Page object. In this code block, the function names match Xrm.Page functions, and the only magic they do is check that a field exists on the form before calling the base Xrm.Page function.
Now you can do code specific to creating a new Assistant contact. We know that if the Assistant lookup GUID matches the _CreateFromId, then this Quick Create form was launched from the Assistant lookup field on a Contact form. This Quick Create form is going to create a new Assistant Contact! In our case, the only additional default we wanted was to set the Relationship Type to a custom OptionSet value labeled "Assistant". Relationship mappings alone cannot do that. Since we only wanted one field defaulted, I just added one more line into the Assistant OnChange function that I already had.
If there were many more fields to set, more complicated logic to run, or other records to create or link while creating an Assistant, I would do the work in a plugin registered on the Pre-Operation Create event of Contacts. If lots of fields were going to get defaulted that the user probably should not change, then good form design would be to not put them on the form at all, and use plugin code to set the defaults during the Pre-Create event. Do it in the Pre-Create event to create the record with those values already set. If you do it in the Post-Create event, it will still work, but you cause an Update Contact event which is another transaction and audit history entry.
I think if we did not put the Assistant field on the Quick Create form, we probably would not get the circular reference error in the first place, and we could have used a plugin to default the Relationship Type to Assistant during the Pre-Create stage.
11/24/2015
Scott Hanselman has more details. Here is my personal summary of the news.
Visual Studio Code (like Sublime Text, Github Atom, etc., not an IDE) is Open Source, and in a new Beta that supports Extensions. Go support, Yeoman, and Open File in Vim (for your command line friends) are some featured extensions. It is cross platform too!
ASP.NET 5 RC1 is the cross platform release candidate for that Microsoft web framework thing that no one uses outside of Windows. The "5" part actually means you can develop, host, and run ASP.NET from Linux and OS X. It has a "Go Live" license which means if you want to deploy it to Production in Linux or OS X, Microsoft will support you. Documentation is improved compared to the usual MSDN pages too.
If you need to start right now, then this site should detect your OS and tell you how to get ASP.NET. On my phone, it just said view source.
Natively developing and running .NET bits outside of Windows?! Sorcery?!
.NET Core is the core bits of the .NET Framework, excluding things that tie directly to the Windows OS cough WPF cough WinForms. Not sure anyone outside Windows wants those anyway. Development of the Core is primarily driven by ASP.NET 5 workloads, but it aims to fulfill the desire for a modular runtime where features and libraries can be cherry picked. It is the "If you ask for a banana, you get only a banana and NOT the Gorilla and NOT the rest of the jungle" kind of thing. Conceptual overview of ASP.NET Core.
The .NET Execution Environment (DNX) is where you pick the .NET bits you want available for your runtime environment on your target platform. It is the SDK for your application. The DNX Version Manager (DNVM) allows you to wrangle your many DNX versions and flavors, and you can switch between them a la command line.
DNX overview on ASP Net docs.
Why would you want to run Microsoft code on Linux? Maybe that is what your IT staff knows. Maybe you do not want to have that one Microsoft box in the corner. The DNX and .NET Core let you treat Linux as just another place to run your bits. Well, CentOS and CentOS derivations are not supported yet as of RC1's known issues.
I rip off Scott Hanselman's closing remarks from his blog post about all of this.
WHAT DOES IT ALL MEAN?
It means that you can build basically whatever you want, however you want. You can use the editor you like, the OS you like, and the languages you like. VSCode on a Mac doing Node and deploying to Azure? Check. ASP.NET 5 with C# to Docker Containers in a bunch of VMs created in Azure and managed with Microsoft Operations Manager? Check. And on and on. Node.js on VS, C to Raspberry Pi's in C in VS, whatever you dig. It's a whole new world.