Don't Do That, Do This: The .NET 6 Edition
https://www.daveabrock.com/2021/12/08/do-this-not-that-the-net-6-edition/ (Wed, 08 Dec 2021)

This post is my annual contribution to the 2021 C# Advent Calendar. Please check out all the great posts from our wonderful community!

Have you heard? .NET 6 has officially arrived. There's a lot of good stuff: C# 10, performance improvements, Hot Reload, Minimal APIs, and much more. As is the case for most releases, a few big features tend to get most of the hype.

What about the features and improvements that don't knock your socks off but still make your daily development experience more productive? Many of these "quality of life" features in .NET 6 remove boilerplate and pain, helping you get to the point: shipping quality software.

As I get into the holiday spirit, consider this a stocking of sorts: just some random, little things that I hope you'll find enjoyable. (And despite the catchy title, there are always tradeoffs: do what works for you.)

Don't chunk large collections manually, use the new LINQ API instead

When working with large collections of data, you likely need to work with smaller "chunks" of it—a big use case would be if you're getting a lot of data back from a third-party API. If there's no pagination set up and you have a bunch of data in memory, you'll probably want a way to "page" or split up the data.

What's a .NET developer to do? Do things the hard way, probably. You'd write some logic to set a page size, check which page you're on and whether any elements are left, then update counts as you add pages to a collection. It'd be a series of Take and Skip LINQ calls, or maybe even an extension method, like this one that's popular on Stack Overflow:

static class LinqExtensions
{
    public static IEnumerable<IEnumerable<T>> Split<T>(this IEnumerable<T> list, int parts)
    {
        int i = 0;
        var splits = from item in list
                     group item by i++ % parts into part
                     select part.AsEnumerable();
        return splits;
    }
}

Don't do that, do this: use the new LINQ Chunk API. When we call Chunk on 200 elements with a page size of 10, we'll get 20 arrays of 10 elements each.

int pageSize = 10;
IEnumerable<Employee[]> employeeChunk = employees.Chunk(pageSize);
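From there, paging is just a loop. A minimal sketch, reusing the employees collection from above:

int pageNumber = 1;
foreach (Employee[] page in employees.Chunk(pageSize))
{
    // Each iteration hands you one "page" of up to pageSize elements
    Console.WriteLine($"Page {pageNumber++} has {page.Length} employees.");
}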

If you need just a date, don't use DateTime, use DateOnly

If you want to work with dates and times in .NET, you typically start with DateTime, TimeSpan, or DateTimeOffset. What if you only need to work with dates: just a year, month, and day? In .NET 6, you can use the new DateOnly struct. (You can also use TimeOnly. I also promise this isn't the start of a dating app.)

We've all done something like this:

var someDateTime = new DateTime(2014, 8, 24);
var justTheDate = someDateTime.ToShortDateString();

Console.WriteLine(someDateTime); // 8/24/2014 12:00:00 AM
Console.WriteLine(justTheDate); // "8/24/2014"

Don't do that, do this: use the DateOnly struct.

var someDateTime = new DateOnly(2014, 8, 24);
Console.WriteLine(someDateTime); // 8/24/2014

There's a lot more you can do with these, obviously, in terms of manipulation, calculating days between dates, and even combining with DateTime. Apart from being easier to work with, DateOnly offers better type safety for date-only values, sidesteps DateTime's ambiguous Kind property, and serializes more simply.
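To make that concrete, here's a quick sketch of the date math and the DateTime interplay (the scenario is invented, but the APIs are the .NET 6 ones):

var today = DateOnly.FromDateTime(DateTime.Now);            // DateTime -> DateOnly
var appointment = today.AddDays(30);                        // date math, no time component
var daysUntil = appointment.DayNumber - today.DayNumber;    // days between two dates
var reminder = appointment.ToDateTime(new TimeOnly(9, 30)); // DateOnly + TimeOnly -> DateTime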

Don't wire up a lot of custom code for logging HTTP requests, use the new logging middleware

Before .NET 6, logging HTTP requests wasn't hard, but it was a little cumbersome. Here's one way: you'd probably have logic to read the request body, use your expert knowledge of ASP.NET Core middleware to pass the stream to whatever is next in the pipeline, and remember to register your middleware—all for a very common activity in any reasonably complex web application.
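For context, here's roughly what that hand-rolled approach looks like. This is an illustrative sketch, not anyone's canonical middleware:

public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Allow the request body to be read here, then rewound for the rest of the pipeline
        context.Request.EnableBuffering();
        using var reader = new StreamReader(context.Request.Body, leaveOpen: true);
        var body = await reader.ReadToEndAsync();
        context.Request.Body.Position = 0;

        _logger.LogInformation("{Method} {Path} {Body}", context.Request.Method, context.Request.Path, body);

        await _next(context);
    }
}

// ... and remember to register it:
// app.UseMiddleware<RequestLoggingMiddleware>();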

Don't do that, do this: use the new .NET 6 HTTP Logging middleware to make your life easier.

Add this to your project's middleware:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseHttpLogging();

    // other stuff here, removed for brevity
}

Then, customize the logger as you see fit.

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpLogging(logging =>
    {
        logging.LoggingFields = HttpLoggingFields.All;
        logging.RequestHeaders.Add("X-Request-Header");
        logging.ResponseHeaders.Add("X-Response-Header");
        logging.RequestBodyLogLimit = 4096;
        logging.ResponseBodyLogLimit = 4096;
    });
}

You'll want to be watchful of what you log and how often you log it, but it's a great improvement.

Don't use hacks to handle fatal Blazor Server exceptions, use the ErrorBoundary component

What happens when an unhandled exception in Blazor Server occurs? It's treated as fatal because "the circuit is left in an undefined state which could lead to stability or security problems in Blazor Server." As a result, you might need to sprinkle try/catch blocks all over the place as a preventive measure or move logic into JavaScript, since no C# code runs after the unhandled exception.

Don't do that, do this: use the new ErrorBoundary component. It isn't a global exception handler, but it will help deal with unpredictable behavior, especially with components you don't own and can't control.

You can see my article for the full treatment, but here's the gist: I can add an ErrorBoundary around the @Body of my default layout.

<div class="main">
    <div class="content px-4">
        <ErrorBoundary>
            @Body
        </ErrorBoundary>
    </div>
</div>

Of course, you probably want to go further than a catch-all boundary. Here's me iterating through a list:

<tbody>
    @foreach (var employee in Employees)
    {
        <ErrorBoundary @key="employee">
            <ChildContent>
                <tr>
                    <td>@employee.Id</td>
                    <td>@employee.FirstName</td>
                    <td>@employee.LastName</td>
                 </tr>
            </ChildContent>
            <ErrorContent>
                Sorry, I can't show @employee.Id because of an internal error.
            </ErrorContent>
        </ErrorBoundary>
    }
</tbody>

Don't use Server.Kestrel verbose logging for just a few things, use the new subcategories

If I want to enable verbose logging for Kestrel, I'd previously need to use Microsoft.AspNetCore.Server.Kestrel. That still exists, but there are also new subcategories that should make things less expensive. (Computationally; you're still on the hook for holiday gifts, sorry.)

In addition to Server.Kestrel, we now have Kestrel.BadRequests, Kestrel.Connections, Kestrel.Http2, and Kestrel.Http3.

Let's say you only want to log bad requests. You'd normally do this:

{
  "Logging": {
    "LogLevel": {
      "Microsoft.AspNetCore.Server.Kestrel": "Debug"
    }
  }
}

Don't do that. Do this:

{
  "Logging": {
    "LogLevel": {
      "Microsoft.AspNetCore.Server.Kestrel.BadRequests": "Debug"
    }
  }
}

Is it the sexiest thing you'll read today? Unless you love logging more than I thought you did, probably not. But it'll definitely make working with Kestrel verbose logging much easier.

Don't get lost in brackets, use C# 10 file-scoped namespaces instead

C# 10 introduces the concept of file-scoped namespaces.

Here's a typical use of namespaces in C#:

namespace SuperheroApp.Models
{
   public class Superhero
   {
      public string? FirstName { get; set; }
      public string? LastName { get; set; }
   }
}

Instead of downloading a bracket colorizer extension, do this instead:

namespace SuperheroApp.Models;

public class Superhero
{
   public string? FirstName { get; set; }
   public string? LastName { get; set; }
}

Also, for the record (ha!), you can make this even simpler if you want to take advantage of immutability and value-like behavior:

namespace SuperheroApp.Models;

public record Superhero(string? FirstName, string? LastName);

You should be aware of what you're getting with positional parameters, but we can all agree this isn't your grandma's C# (and I'm here for all of it, my apologies to grandma).
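If you're wondering what that value-like behavior buys you, here's a quick sketch:

var hero = new Superhero("Tony", "Stark");
var alias = hero with { FirstName = "Anthony" };  // non-destructive mutation
var (first, last) = hero;                         // positional deconstruction

Console.WriteLine(hero == new Superhero("Tony", "Stark")); // True: records compare by value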

Speaking of brackets ...

Since we're on the topic of brackets, C# 10 also introduces extended property patterns.

You could use property patterns in a switch expression like this:

public static int CalculateSuitSurcharge(Superhero hero) =>

        hero switch
        {
            { Suit: { Color: "Blue" } } => 100,
            { Suit: { Color: "Yellow" } } => 200,
            { Suit: { Color: "Red" } } => 300
            _ => 0
        };

Don't do that. Do this:

public static int CalculateSuitSurcharge(Superhero hero) =>

        hero switch
        {
            { Suit.Color: "Blue" } => 100,
            { Suit.Color: "Yellow" } => 200,
            { Suit.Color: "Red" } => 300
            _ => 0
        };

Wrapping up

I hope you enjoyed this post, and you learned a thing or two about how .NET 6 can make your developer life just a little bit easier. Have you tried any of these? Do you have others to share? Let me know in the comments or on Twitter.

.NET 6 Has Arrived: Here Are A Few of My Favorite Things
https://www.daveabrock.com/2021/12/03/dotnet-6-favorite-things/ (Fri, 03 Dec 2021)

This post was originally published on the Telerik Developer Blog.

For the second straight November, .NET developers have received an early holiday gift: a new release of the .NET platform. Last month, Microsoft made .NET 6 generally available—and hosted a virtual conference to celebrate its new features.

What were the goals of .NET 6? If you look at themesof.net, you can quickly see the main themes of the .NET 6 release, including the following:

  • Appeal to net-new devs, students, and new technologists
  • Improve startup and throughput using runtime exception information
  • The client app development experience
  • Recognized as a compelling framework for cloud-native apps
  • Improve inner-loop performance for .NET developers

We could spend the next ten blog posts writing about all the new .NET 6 improvements and features. I'd love to do that, but I haven't even started shopping for the holidays. Instead, I'd like to show off some of my favorite things that I'll be using on my .NET 6 projects. Most of these changes revolve around web development in ASP.NET Core 6 since this site focuses on those topics.

Hot Reload

Way back in April, I wrote here about the Hot Reload capability in .NET 6. We've come a long way since then, but I felt the same as I do now: this is the biggest boost to a .NET web developer's productivity over the last few years. If you aren't familiar with Hot Reload, let's quickly recap.

The idea of "hot reload" has been around for quite a few years: you save a file, and the change appears almost instantaneously. Once you work with hot reloading, it's tough to go back. As the .NET team tries to attract outsiders and new developers, not having this feature can be a non-starter to outsiders: it's table stakes for many developers. The concept is quite popular in the front-end space and .NET developers have been asking for this for a while. (Admittedly, introducing hot reload to a statically typed language is much more complex than doing it for a traditionally interpreted language like JavaScript.)

With .NET 6, you can use Hot Reload to make changes to your app without needing to restart or rebuild it. Hot Reload doesn't just work with static content, either—it works with most C# use cases and preserves the state of your application—but you'll want to hit up the Microsoft Docs to learn about unsupported app scenarios.

Here's a quick example to show how the application state gets preserved in a Blazor web app. If I'm in the middle of an interaction and I update the currentCount from 0 to 10, will things reset? No! Notice how I can continue increasing the counter, and then my counter starts at 10 when I refresh the page.

You can leverage Hot Reload in whatever way you prefer: powerful IDEs like JetBrains Rider and Visual Studio 2022 have this capability. You can also utilize it from the command line if you prefer (yes, we do). It's important to mention that this works for all ASP.NET Core Web apps.
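If the terminal is more your speed, the gist is a single command from your project directory. In .NET 6, dotnet watch applies hot reload by default and falls back to restarting the app when a change can't be hot-applied:

dotnet watch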

Minimal APIs

Have you ever wanted to write simple APIs in ASP.NET Core quickly but felt helpless under the bloat of ASP.NET Core MVC, wishing you could have an Express-like model for writing APIs?

The ASP.NET team has rolled out minimal APIs—a new, simple way to build small microservices and HTTP APIs in ASP.NET Core. Minimal APIs hook into ASP.NET Core's hosting and routing capabilities and allow you to build fully functioning APIs with just a few lines of code. Minimal APIs do not replace building APIs with MVC—if you are building complex APIs or prefer MVC, you can keep using it as you always have—but it's an excellent approach to writing no-frills APIs. We wrote about it in June, but things have evolved a lot since then. Let's write a simple API to show it off.

First, the basics: thanks to lambdas, top-level statements, and C# 10 global usings, this is all it takes to write a "Hello, Telerik!" API.

var app = WebApplication.Create(args);
app.MapGet("/", () => "Hello, Telerik!");
app.Run();

Of course, we'll want to get past the basics. How can I really use it? Using WebApplication, you can add middleware just like you previously would in the Configure method in Startup.cs. In .NET 6, your configuration takes place in Program.cs instead of a separate Startup class.

var app = WebApplication.Create(args);
app.UseResponseCaching(); 
app.UseResponseCompression(); 
app.UseStaticFiles();

// ...

app.Run();

If you want to do anything substantial, though, you'll want to add services using a WebApplicationBuilder (again, like you typically would previously in the ConfigureServices method in Startup.cs):

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<MyCoolService>();
builder.Services.AddSingleton<MyReallyCoolService>();

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new() {
        Title = builder.Environment.ApplicationName, Version = "v1" });
});

var app = builder.Build();

// ...

app.Run();

Putting it all together, let's write a simple CRUD API that works with Entity Framework Core and a DbContext. We'll work with some superheroes because apparently, that's what I do. With the help of record types, we can make our data models a little less verbose, too.

using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<SuperheroDb>(o => o.UseInMemoryDatabase("Superheroes"));

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new() {
        Title = builder.Environment.ApplicationName, Version = "v1" });
});

var app = builder.Build();

app.MapGet("/superheroes", async (SuperheroDb db) =>
{
    await db.Superheroes.ToListAsync();
});

app.MapGet("/superheroes/{id}", async (SuperheroDb db, int id) =>
{
    await db.Superheroes.FindAsync(id) is Superhero superhero ?
    	Results.Ok(superhero) : Results.NotFound();
});

app.MapPost("/superheroes", async (SuperheroDb db, Superhero hero) =>
{
    db.Superheroes.Add(hero);
    await db.SaveChangesAsync();

    return Results.Created($"/superheroes/{hero.Id}", hero);
});

app.MapPut("/superheroes/{id}",
    async (SuperheroDb db, int id, Superhero heroInput) =>
    {
        // Check existence without tracking a second instance of the same key
        if (!await db.Superheroes.AnyAsync(h => h.Id == id))
            return Results.NotFound();

        db.Update(heroInput with { Id = id });
        await db.SaveChangesAsync();

        return Results.NoContent();
    });


app.MapDelete("/superheroes/{id}",
    async (SuperheroDb db, int id) =>
    {
        var hero = await db.Superheroes.FindAsync(id);
        if (hero is null)
            return Results.NotFound();

        db.Superheroes.Remove(hero);
        await db.SaveChangesAsync();
        return Results.Ok();
    });

app.Run();

record Superhero(int Id, string? Name, int MaxSpeed);

class SuperheroDb : DbContext
{
    public SuperheroDb(DbContextOptions<SuperheroDb> options)
        : base(options) { }

    public DbSet<Superhero> Superheroes => Set<Superhero>();
}

As you can see, you can do a lot with Minimal APIs while keeping them relatively lightweight. If you want to continue using MVC, that's your call—but with APIs in .NET you no longer have to worry about the overhead of MVC if you don't want it. If you want to learn more, David Fowler has put together a comprehensive document on how to leverage Minimal APIs—it's worth checking out.

Looking at the code sample above, it's easy to wonder if this is all a race to see what we can throw into the Program.cs file, and how easy it is to get messy. This can happen in any app: I don't know about you, but I've seen my share of controllers that were ripe for abuse.

It's essential to see the true value of this model—not how cool and sexy it is to write an entire .NET CRUD API in one file, but the ability to write simple APIs with minimal dependencies and exceptional performance. If things look unwieldy, organize your project as you see fit, just like you always have.
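For instance, nothing stops you from pulling route handlers into their own classes. A minimal sketch (the handler class name is my own invention):

// Program.cs
app.MapGet("/superheroes", SuperheroHandlers.GetAll);

// SuperheroHandlers.cs
public static class SuperheroHandlers
{
    public static async Task<IResult> GetAll(SuperheroDb db) =>
        Results.Ok(await db.Superheroes.ToListAsync());
}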

Simplified HTTP logging

How often have you used custom middleware, libraries, or other solutions to log simple HTTP requests? I've done it more than I'd like to admit. .NET 6 introduces HTTP Logging middleware for ASP.NET Core apps that logs information about HTTP requests and responses for you, like:

  • Request information
  • Properties
  • Headers
  • Body data
  • Response information

You can also select which logging properties to include, which can help with performance too.

To get started, add this in your project's middleware:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseHttpLogging();

    // other stuff here, removed for brevity
}

To customize the logger, you can use AddHttpLogging:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpLogging(logging =>
    {
        logging.LoggingFields = HttpLoggingFields.All;
        logging.RequestHeaders.Add("X-Request-Header");
        logging.ResponseHeaders.Add("X-Response-Header");
        logging.RequestBodyLogLimit = 4096;
        logging.ResponseBodyLogLimit = 4096;
    });
}

If we want to pair this with a Minimal API, here's how it would look:

using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpLogging(logging =>
{
    logging.LoggingFields = HttpLoggingFields.All;
    logging.RequestHeaders.Add("X-Request-Header");
    logging.RequestHeaders.Add("X-Response-Header");
    logging.RequestBodyLogLimit = 4096;
    logging.ResponseBodyLogLimit = 4096;
});

var app = builder.Build();
app.UseHttpLogging();

app.MapGet("/", () => "I just logged the HTTP request!");

app.Run();

Blazor improvements

.NET 6 ships with many great updates to Blazor, the client-side UI library that's packaged with ASP.NET Core. I want to discuss my favorite updates: error boundaries, dynamic components, and preserving pre-rendered state. Check out Jon Hilton's great post if you want to learn more about .NET 6 Blazor updates.

Error boundaries

Blazor error boundaries provide an easy way to handle exceptions within your component hierarchy. When an unhandled exception occurs in Blazor Server, it's treated as a fatal error because the circuit hangs in an undefined state. As a result, your app is as good as dead: it loses its state, and your users are met with an undesirable "An unhandled error has occurred" message, with a link to reload the page.

Inspired by error boundaries in React, the ErrorBoundary component attempts to catch recoverable errors that can't permanently corrupt state—and like the React feature, it also renders a fallback UI.

I can add an ErrorBoundary around the @Body of a Blazor app's default layout, like so.

<div class="main">
    <div class="content px-4">
        <ErrorBoundary>
            @Body
        </ErrorBoundary>
    </div>
</div>

If I get an unhandled exception, I'll get the default fallback error message.

Of course, you can always customize the UI yourself.

<ErrorBoundary>
    <ChildContent>
        @Body
    </ChildContent>
    <ErrorContent>
        <p class="custom-error">Woah, what happened?</p>
    </ErrorContent>
</ErrorBoundary>

If you'd like the complete treatment, check out my post from earlier this summer.  It still holds up (even the MAUI Man references).

Dynamic components

What happens if you want to render your components dynamically when you don't know your types ahead of time? Previously, this was a pain in Blazor: you'd build a custom render tree or declare a series of RenderFragment components. With .NET 6, you can render a component specified by type. When you bring in the component, you set the Type and, optionally, a dictionary of Parameters.

<DynamicComponent Type="@myType" Parameters="@myParameterDictionary" />

I find dynamic components especially valuable when working with form data—you can render data based on selected values without iterating through a bunch of possible types. If this interests you (or if you like rockets), check out the official documentation.
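For a rough idea of how those two parameters get populated, here's a sketch; SurveyPrompt is just the stock Blazor template component, standing in for whatever type you'd resolve at runtime:

@code {
    // The component type and its parameters, resolved however your app needs
    private Type myType = typeof(SurveyPrompt);
    private Dictionary<string, object> myParameterDictionary = new()
    {
        { "Title", "How are we doing?" }
    };
}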

Preserving pre-rendered state

Despite all the performance and trimming improvements to Blazor WebAssembly, initial load time remains a consideration. To help with this, you can prerender apps from the server to improve perceived load time. This means Blazor can immediately render your app's HTML while it wires up its dynamic bits. That's great, but it also previously meant that any state was lost.

Help has arrived. To persist state, there's a new persist-component-state tag helper that you can utilize:

<component type="typeof(App)" render-mode="ServerPrerendered" />
<persist-component-state />

In your C# code, you can inject PersistentComponentState and register an event to persist the objects. On subsequent loads, your OnInitializedAsync method can retrieve data from the persisted state—if it doesn't exist, it'll get the data from your original source (typically a service of some sort).
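Here's a minimal sketch of that flow. The WeatherForecast type, ForecastService, and the "forecasts" key are placeholders of mine; the PersistentComponentState APIs are the real ones:

@inject PersistentComponentState ApplicationState
@inject WeatherForecastService ForecastService

@code {
    private WeatherForecast[]? forecasts;
    private PersistingComponentStateSubscription _subscription;

    protected override async Task OnInitializedAsync()
    {
        // Runs during prerendering to stash the data in the rendered page
        _subscription = ApplicationState.RegisterOnPersisting(PersistForecasts);

        if (ApplicationState.TryTakeFromJson<WeatherForecast[]>("forecasts", out var restored))
            forecasts = restored;
        else
            forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }

    private Task PersistForecasts()
    {
        ApplicationState.PersistAsJson("forecasts", forecasts);
        return Task.CompletedTask;
    }
}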

To see it in action, check out the Microsoft documentation.

C# 10 updates

Along with .NET 6, we've also got a new version of C#—C# 10. It ships with some great new features, like file-scoped namespaces, global usings, lambda improvements, extended property patterns, null argument checks, and much more. Check out Joseph Guadagno's blog post for more details, as well as Microsoft's blog post.

Use AzCopy to migrate files from AWS S3 to Azure Storage
https://www.daveabrock.com/2021/11/21/use-azcopy-to-migrate-files-from-aws-s3-to-azure-storage/ (Sun, 21 Nov 2021)

At the risk of upsetting Jeff Bezos, I recently moved a few million PDF files from Amazon S3 to Azure Storage (Blob Storage, in fact). I kept it simple and opted to use Microsoft's AzCopy tool. It's a command-line tool that allows you to copy blobs or files to or from an Azure Storage account. AzCopy also integrates with the Azure Storage Explorer client application, but using a UI wasn't ideal with the number of files I had.

In this post, I'd like to show an authorization "gotcha" to keep in mind and a few things I learned that might help you.

Authorize AzCopy to access your Azure Storage account and Amazon S3

After you install AzCopy—the installation is a .zip or .tar file depending on your environment—you'll need to let AzCopy know that you are authorized to access the Amazon S3 and Azure Storage resources.

From the Azure side, you can elect to use Azure Active Directory (AD) or a SAS token. In my case, I ran azcopy login to log in to Azure with my credentials. If you want to run inside a script or have more advanced use cases, it's a good idea to authorize a managed identity.

With ownership access to the storage account, I thought that was all I needed. Not so fast!

You will also need one of these permissions in Azure AD:

  • Storage Blob Data Reader (downloads only)
  • Storage Blob Data Contributor
  • Storage Blob Data Owner

Note: Even if you are a Storage Account Owner, you still need one of those permissions.

You'll need to grab an Access Key ID and AWS Secret Access Key from Amazon Web Services. If you're not sure how to retrieve those, check out the AWS docs.

From there, it's as easy as setting a few environment variables (I'm using Windows):

  • set AWS_ACCESS_KEY_ID=<my_key>
  • set AWS_SECRET_ACCESS_KEY=<my_secret_key>

Copy an AWS bucket directory to Azure Storage

I needed to copy all the files from a public AWS directory with the pattern /my-bucket/dir/dir/dir/dir/ to a public Azure Storage container. To do that, I called azcopy like so:

azcopy "https://s3.amazonaws.com/my-bucket/dir/dir/dir/dir/*" "https://mystorageaccount.blob.core.windows.net/mycontainer" --recursive=true

This command allowed me to take anything under the directory while also keeping the file structure from the S3 bucket. I knew that it was all PDF files, but I could have also used the --include-pattern flag like this:

azcopy "https://s3.amazonaws.com/my-bucket/dir/dir/dir/dir/*" "https://mystorageaccount.blob.core.windows.net/mycontainer" --include-pattern "*.pdf" --recursive=true

There's a lot of flexibility here—you can specify multiple complete file names, wildcard characters (I could have set multiple file types here), and even filter by file-modified dates. I might need to be more selective in the future, so I was happy to see all the options at my disposal.
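For instance, if I only wanted certain file types changed after a given date, the command would look something like this (the pattern list and cutoff date are made up; double-check the flags against your AzCopy version):

azcopy copy "https://s3.amazonaws.com/my-bucket/dir/dir/dir/dir/*" "https://mystorageaccount.blob.core.windows.net/mycontainer" --include-pattern "*.pdf;*.png" --include-after "2021-11-01T00:00:00Z" --recursive=true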

Resuming jobs

If you run AzCopy for a while, you might have to deal with a stopped job. It could be because of failures or a system reboot. To start where you left off, you can run azcopy jobs list to get a list of your jobs in this format:

Job Id: <some-guid>
Start Time: <when-the-job-started>
Status: Cancelled | Completed | Failed
Command: copy "source" "destination" --any-flags

With the correct job ID in hand, I could run the following command to pick up where I left off:

azcopy jobs resume <job-id>

If you need to get to the bottom of any errors, you can change the default log level (the default is INFO) and filter by jobs with a Failed state. AzCopy creates log and plan files for every job you run in the %USERPROFILE%\.azcopy directory on Windows.
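For example, you can lower the log noise on the copy itself and then inspect only the failed transfers within a job (flags as of AzCopy v10; verify against your version):

azcopy copy "<source>" "<destination>" --recursive=true --log-level=ERROR
azcopy jobs show <job-id> --with-status=Failed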

After you finish, you can clean up all your plan and log files by executing azcopy jobs clean (or azcopy jobs rm <job-id> if you want to remove just one).

Performance optimization tips

Microsoft recommends that individual jobs contain no more than 10 million files. Jobs that transfer more than 50 million files can suffer from degraded performance because of the tracking overhead. I didn't need to worry about performance, but I still learned a few valuable things.

To speed things up, you can increase the number of concurrent requests by setting the AZCOPY_CONCURRENCY_VALUE environment variable. By default, Microsoft sets the value to 16 multiplied by the number of CPUs on your machine—if you have fewer than five CPUs, the value is 16. Because I have 12 CPUs, AzCopy set AZCOPY_CONCURRENCY_VALUE to 192.

If you'd like to confirm, you can look at the top of your job's log file.

2021/11/19 16:39:20 AzcopyVersion  10.13.0
2021/11/19 16:39:20 OS-Environment  windows
2021/11/19 16:39:20 OS-Architecture  amd64
2021/11/19 16:39:20 Log times are in UTC. Local time is 19 Nov 2021 10:39:20
2021/11/19 16:39:20 Job-Command copy https://mystorageaccount.blob.core.windows.net/my-container --recursive=true
2021/11/19 16:39:20 Number of CPUs: 12
2021/11/19 16:39:20 Max file buffer RAM 6.000 GB
2021/11/19 16:39:20 Max concurrent network operations: 192 (Based on number of CPUs. Set AZCOPY_CONCURRENCY_VALUE environment variable to override)
2021/11/19 16:39:20 Check CPU usage when dynamically tuning concurrency: true (Based on hard-coded default. Set AZCOPY_TUNE_TO_CPU environment variable to true or false override)
2021/11/19 16:39:20 Max concurrent transfer initiation routines: 64 (Based on hard-coded default. Set AZCOPY_CONCURRENT_FILES environment variable to override)
2021/11/19 16:39:20 Max enumeration routines: 16 (Based on hard-coded default. Set AZCOPY_CONCURRENT_SCAN environment variable to override)
2021/11/19 16:39:20 Parallelize getting file properties (file.Stat): false (Based on AZCOPY_PARALLEL_STAT_FILES environment variable)

You can tweak these values to see what works for you. Luckily, AzCopy allows you to run benchmark tests that will report a recommended concurrency value.
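I didn't run one myself for this job, but the shape of a benchmark command looks roughly like this (AzCopy uploads auto-generated test data to the container and reports results; the file count and size here are arbitrary):

azcopy bench "https://mystorageaccount.blob.core.windows.net/mycontainer" --file-count 500 --size-per-file 4M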

Wrap up

This was my first time using AzCopy for any serious work, and I had a good experience. It comes with a lot more flexibility than I imagined and even has features for limiting throughput and optimizing memory use.

To get started, click the link below to begin using AzCopy—and let me know what you think of it!

Copy or move data to Azure Storage by using AzCopy v10
AzCopy is a command-line utility that you can use to copy data to, from, or between storage accounts. This article helps you download AzCopy, connect to your storage account, and then transfer files.
Exploring C# 10: Use Extended Property Patterns to Easily Access Nested Properties
https://www.daveabrock.com/2021/11/18/exploring-c-10-use-extended-property-patterns-to-easily-access-nested-properties/ (Wed, 17 Nov 2021)

Welcome back to my series on new C# 10 features. So far we've talked about file-scoped namespaces and global using declarations. Today, we'll talk about extended property patterns in C# 10.

Over the last few years, C# has made a lot of property pattern enhancements. Starting with C# 8, the language has supported property patterns, a way to match an object's specific properties. C# 10's extended property patterns make matching on nested properties much cleaner.

For our example, let's continue the superhero theme with a Person, a Superhero that inherits from Person, and a Suit. We're going to calculate any special surcharges when making a particular superhero's suit.  

public class Person
{
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public string? Address { get; set; }
    public string? City { get; set; }
}

public class Superhero : Person
{
    public int MaxSpeed { get; set; }
    public Suit? Suit { get; set; }
}

public class Suit
{
    public string? Color { get; set; }
    public bool HasCape { get; set; }
    public string? WordsToPrint { get; set; }
}

Now let's create a quick object for Iron Man:

var tonyStark = new Superhero()
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    Suit = new Suit
    {
       Color = "Red",
       HasCape = false
    }
};

In our switch expression, let's say we want to determine special pricing based on a particular suit's color.

Here's how we'd do it in C# 9:

public static int CalculateSuitSurcharge(Superhero hero) =>

        hero switch
        {
            { Suit: { Color: "Blue" } } => 100,
            { Suit: { Color: "Yellow" } } => 200,
            { Suit: { Color: "Red" } } => 300
            _ => 0
        };

With C# 10, we can use dot notation to make it a bit cleaner:

public static int CalculateSuitSurcharge(Superhero hero) =>

        hero switch
        {
            { Suit.Color: "Blue" } => 100,
            { Suit.Color: "Yellow" } => 200,
            { Suit.Color: "Red" } => 300
            _ => 0
        };

Much like before, you can also use multiple expressions at once. Let's say we wanted to charge a surcharge for a notoriously difficult customer. Look at how we can combine multiple properties in a single line.

public static int CalculateSuitSurcharge(Superhero hero) =>

        hero switch
        {
            { Suit.Color: "Blue" } => 100,
            { Suit.Color: "Yellow" } => 200,
            { Suit.Color: "Red", Address: "10880 Malibu Point" }  => 500,
            { Suit.Color: "Red" } => 300
            _ => 0
        };

That's all there is to it. Will this feature change your life? No. Even so, it's a nice update that allows for cleaner and simpler code.

There's a lot more you can do with property patterns as well, so I'd recommend hitting up the Microsoft Docs to learn more.

Patterns - C# reference
Learn about the patterns supported by C# pattern matching expressions and statements.

If you're interested in the design proposal, you can also find it below.

Extended property patterns - C# 10.0 draft specifications
This feature specification describes extended property patterns. These enable more convenient syntax to pattern match on properties nested in objects contained in an object.

Let me know your thoughts below. Happy coding!

Exploring Fiddler Jam: What's New?
https://www.daveabrock.com/2021/11/06/exploring-fiddler-jam-whats-new/ (Sat, 06 Nov 2021)

This post was originally featured on the Telerik Developer Blog.

In August, I introduced you to Fiddler Jam. What is Jam? In short, it’s a solution to help support and development teams troubleshoot remote issues with web applications in a quick, easy and secure fashion. Think of Jam as a band playing perfectly in sync: replace the instruments with end users, support staff and software developers playing in tune—and better yet, you’ll never have to take requests for “Free Bird.”

If this is your first time learning about Jam, here's a quick rundown. Users of your site can use a Chrome extension to share the full context of an issue, allowing your support team to analyze the data immediately. If needed, developers can reproduce and debug the issue using the Fiddler Everywhere debugging proxy. (If you'd like to learn more about Fiddler Jam basics, check out Rob Lauer's article.)

Jam already has a great set of features, but the development team—let's call them Jammers—has pushed even more capabilities and improvements since we last talked. We can put it through its paces by taking a look at my Blast Off with Blazor site, which I've written about. Let's see what the Jammers have been up to.

Video recordings

When we last talked, we were able to see screenshots of a lot of user actions so we could see what users were doing when issues occurred. When you start to look at a sequence of actions, though, it might be hard to work through all the steps one by one. My favorite update is definitely a new video recording feature.

After a user sends your support team a link to their Jam session, you’ll notice that the first item in the log (“Capturing Started”) is a full video recording of their experience. This only records the tab in which the user allowed their session to be captured. At this phase, you can choose to watch the whole video.

As you watch it, notice how the recording highlights the corresponding log entries as they are happening. You can pause the video where needed and go right where you need to be with full context. You can also click on an event in the log and be sent to the place in the video where it occurred. From there, you can play the recording or go back and forward as needed. Instead of dealing with word-of-mouth context, you’ve got everything you need.

In my case, I have a 14-second capture full of lots of media, searching and scrolling. Watch the play icon at the left and how it jumps through the log events as time passes. Very cool.

While this is a powerful feature, Corporate Dave is here to inform you that Jam enables video recordings and screenshots by default. Be careful when dealing with sensitive data like passwords, credit card numbers, or anything else you want to keep private. Read Fiddler Jam Security for details.

Better tracking for user events

When we last Jam'ed, I briefly discussed how Fiddler Jam could track a few common events, like button clicks. The Jammers have cranked this up a notch and now allow you to track a ton of other events like clicking on div elements, keyboard events, and scroll events.

I don't know about you, but when I debug front-end issues, it isn't as easy as finding that a file wasn't found or there's a big error in the browser console. It's subtle things that make me clamor for the simplicity of Geocities, like event bubbling and misapplied styles.

The Jammers now give us the ability to click the Inspectors tab to understand metadata about the event, like tag type and CSS class. In addition to video recording, you'll see I also have the event highlighted (in this case, typing into a search box) in a convenient screenshot.

Capture storage information

A new option, Capture storage info, allows you to capture local storage and session storage data. If you aren't familiar, local and session storage are HTML5 storage objects that allow you to store data on the client using convenient key-value pairs. Local storage exists until deleted and has no expiration date, while session storage deletes the data when the user closes a browser.

While Corporate Dave will tell you to not store sensitive data here—as local/session storage isn't super secure and, also, the data is easily accessible and hackable from standard browser tooling—it can sometimes be a lightweight solution for simple data lookup. It can work well with simple user preferences or activities: if we want to remember that a user prefers Dark Mode or has already seen our introductory modal that tours our website, we can store it in local storage for easy retrieval.

The Jammers have introduced the ability for you to view storage details from an aptly named Storage Details tab. If I go to a random website—look, the Telerik Blogs website, what a coincidence!—I can determine when a local storage value was set, and what it is.

How do the Advanced Options work now?

With all these great updates, let's take a look at the new Advanced Options that users can set before capturing a session with Fiddler Jam. Most of these are enabled by default, so you won't have to ask users to explicitly turn them on, but let's review. (Don't worry, there isn't a quiz ... yet.)

Unless stated otherwise, all these options are enabled by default.

  • Capture video - As we discussed, this option captures video recording from the active, inspected Chrome tab. Corporate Dave says: "If the screen shows sensitive data, consider disabling it."
  • Capture screenshots – This option adds a screenshot from the active Chrome tab. Corporate Dave says: "If the screen shows sensitive data, consider disabling it."
  • Capture console – This option records any developer console outputs. Again, consider disabling this if your logs have sensitive information.
  • Capture storage info - This option captures local or session storage from the Chrome tab.
  • Mask cookies – This option masks cookie values so that they aren’t visible in the logs. The cookie names will still be visible.
  • Mask all post data – This option masks data sent along in a POST request, like form information. This option is not enabled by default.
  • Disable cache – This option asks the browser to skip its cache and download fresh content from the server.

Updated sharing options

In addition to these new Advanced Options, end users can also view capture details and customize who needs to access the session's details.

First, users can access session details by clicking Capture Successful! I imagine the exclamation point to show the excitement that a user's issue will be diagnosed so quickly! Yes!

Users can click this area to see a quick event log. It's a nice sanity check so users can confirm they are sending you the right logs. If they have 92 tabs open—I would never, but some might—this will help provide a sanity check to make sure they are passing the correct session to you, and not cute cat videos. Again, I would never.

After optionally confirming users are sending the right tab, users can choose to share the details as a link or with specific people. When sharing as a link, I (and Corporate Dave) would suggest using the Password Protection feature.

When you share with specific people, enter the recipient email address(es) to send it over electronic mail. When you do this, only the people with corresponding email addresses will have access to it.

More coming!

There's more coming to Fiddler Jam soon—consider bookmarking the Fiddler Jam roadmap for details.

Get started with Fiddler Jam today by going to the Fiddler Jam site and signing up for a free 14-day trial.

Pretty good work from the Jammers, yes? Good work, team. Take the rest of the day off. You've deserved it.

Saying goodbye to The .NET Stacks
https://www.daveabrock.com/2021/11/04/bye-dotnet-stacks/ (Thu, 04 Nov 2021)

I wanted to write a quick note to let everyone know that I will no longer be producing The .NET Stacks. This does not mean I'll be going away or leaving the wonderful .NET community—it just means I'll be producing content in other ways.

I wouldn't call the newsletter a smashing success by any means, but I was thrilled that something I started while bored during a pandemic grew to a few thousand readers every week across various mediums (whether over e-mail, dev.to, or daveabrock.com). I'm grateful to all of you for reading and being so supportive—and if it helped you, I'm even happier. It's been a lot of fun, and I've even "met" quite a few friends along the way, even if we've never physically met.

I initially set out to produce something more substantive—not just a weekly thing where I send out links—as I felt there wasn't a lot of that out there. I wanted to write about trends and topics, interview folks, and provide a little more context. I quickly realized why there wasn't a lot of that out there: it's so much work to do on a weekly basis, week in and week out.

With a busy personal life, I'm passionate about community work but have limited time for it. I did what I could to automate things, but The .NET Stacks doesn't leave time for any other community work or projects I'm passionate about. I could either slim down the newsletter (and its quality) to make it easier to manage, or focus my work in other areas. I've chosen the latter.

It's hard to believe I've produced 70 weekly issues over 18 months and have used the newsletter to ramble about .NET to the tune of about 104,000 words. Thanks to you all: thanks for reading every week, thanks to those of you who have emailed me nice words or asked if I've lost my marbles—both valid, by the way—and thanks for listening. I hope you'll continue to keep up with me at daveabrock.com. Talk to you soon and keep in touch.

The .NET Stacks #68: 🍿 What a week
https://www.daveabrock.com/2021/10/26/dotnet-stacks-68/ (Tue, 26 Oct 2021)

Happy ... Tuesday? It's been quite an interesting week. We have one thing to discuss, and then straight to the links.

  • .NET Open Source: Who does Microsoft want to be?
  • Last week in the .NET world

.NET Open Source: Who does Microsoft want to be?

To put it lightly, last week was a very eventful week for the .NET community. I'd like to talk about what happened, what we learned, and what it means for .NET open source.

Before diving in, I want to say that I am voicing my opinions only, and my intent is not to "create more drama" or keep harping on past events. While Microsoft resolved the current issue a few days ago, I often find that I can view events with better clarity after stepping back and thinking about them for a few days. You might agree with what I have to say, think I'm blowing it out of proportion, or land somewhere in between. I expect that.

What happened? If you've been subscribed here for the last few months, you know that one of the core objectives of .NET 6 is to improve the "inner development loop" for the .NET platform. The core feature is the "Hot Reload" functionality, which allows developers to see changes applied instantly while running their application. Already viewed as table stakes for modern web application development—front-end developers have enjoyed this capability for many years—it's finally making its way to the .NET platform. As the .NET team is trying to embrace new developers or developers from other platforms, it's a key selling point. If you try to get React or Angular developers to consider something like Blazor, and they learn it isn't available out of the box, you'll likely get a "Seriously?" Once you have it, you can't live without it.

Anyway, Hot Reload was announced and then rolled out in .NET 6 Preview 3, and I wrote about it in detail all the way back in April. At the time, it was rolled out in the .NET CLI via dotnet watch, as many new features are. Eventually, Microsoft said, it would make its way to Visual Studio or whatever tool you enjoy—but since it resided in the core SDK, you could use it across a wide variety of environments and platforms. It worked well, and as a whole, the community was thrilled. Then, right around the time .NET 6 RC 2 was released last week—the last "preview" release that ships with a go-live production license—Microsoft decided to scrap it from dotnet watch and make it available only in Visual Studio 2022, "so we can focus on providing the best experiences to the most users." So long as you enjoy Visual Studio on Windows, that is. As if it wasn't abundantly clear, .NET engineers leaked to The Verge that this was a business decision made at the CVP level.

After a ton of almost universal backlash, the capability was restored to dotnet watch—in a blog post whose only strength was providing political cover, and which delivered none of what we all wanted: honesty, authenticity, and transparency with the community.

I don't think anyone is confusing a trillion-dollar corporation for being a charitable organization. And it is true that a majority of .NET developers are on Visual Studio. If Microsoft decided to make this a Visual Studio feature from the beginning, the pushback would be minimal.

The deserved furor comes from Microsoft ripping out a core feature right before completing a release and done with a PR that was closed to community feedback. This all feels like we got transported to Ballmer-esque Microsoft, and not the "new Microsoft" that appears to be less concerned with sticking you to specific tooling but doing all it can to meet you where you are, so long as you deploy your workloads to Azure.

With the popularity of Visual Studio Code and the emergence of wonderful competitive tools like JetBrains Rider, Microsoft is torn between being open in the name of that fat Azure cash and protecting pricey enterprise licensing with Visual Studio. This week highlighted that conflict of interest and the desire to have their cake and eat it, too.

We've been slowly reaching a breaking point. Over the last few years, Microsoft has restricted debugger licensing, abandoned MonoDevelop, and now given us the Hot Reload controversy. Include the .NET Foundation issues, and Microsoft's "open-source friendly" reputation has taken its hits. The common logic is that while Visual Studio is a moneymaker, for sure, it's a rounding error compared to Azure, so it's good business sense for Microsoft to shed its protectionism in favor of inclusiveness and community goodwill.

Of course, none of this criticism should take away from the amazing work the .NET team has done for both their platform and open source. As a matter of fact, that team is likely the most upset here, as a decision out of their control has hurt their reputation with the community. But where does .NET fit? For me, it raises the question of how Microsoft views .NET: is it an open, innovative platform that drives Azure investment, or a piggy bank that Microsoft executives can shake until every last penny comes out?

To Microsoft's credit, they should be commended for righting their wrongs so quickly. They listened, reacted, and pushed out an apology in just a few days—an amazing feat at a trillion-dollar company with many layers of top-down management. All in all, I think last week was a wonderful testament to the true value of community, where a group of devoted people can help drive change (or undo unfortunate decisions).

However, the apology post is littered with inconsistencies and seems to be protecting senior management's decisions and not their reputation with the community.

The post explains that "In our effort to scope, we inadvertently ended up deleting the source code instead of just not invoking that code path." In my opinion, this wasn't an engineer running the wrong command, but a last-minute change made at the demands of management, with the PR closed to comments. While reading the post, I couldn't help but feel it contradicted this new, open Microsoft—build what you want, on any platform you want, so long as you use Azure—and reeked of the political cover and protectionism that defined its past.

Where do we go from here? Is this a team hitting some bumps after a recent reorg, or a sign of things to come? Have the higher-ups at Microsoft learned their lesson, or is this the beginning of reserving the most coveted SDK features for their exclusive tooling? And most importantly, how will Microsoft balance conflicts of interest across all their tooling choices? Visual Studio Code is an amazing editor, but as Dustin Gorski and others have wondered aloud, isn't it strange that Microsoft's own .NET platform is one of the worst-supported platforms on Code?

If Microsoft wants to be honest with its community, it needs to answer one question: what do you want .NET to be? Do you want to keep it an open, open-source-friendly platform, or make it a piggy bank? It appears Microsoft wants it both ways, and it risks harming its reputation with the community while promising openness.

We know Microsoft will never have a perfect relationship with its developer community. At the end of the day, their responsibilities are to their shareholders. With that said, I have all the faith in the world in the .NET team. I admire them and all the work they have done on the .NET platform. I hope the team aligns with their management and can eventually be transparent with the community about what they truly want .NET to be—no BS, no corporate-speak.


🌎 Last week in the .NET world

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

The .NET Stacks #67: 🆕 .NET 6 RC2 arrives
https://www.daveabrock.com/2021/10/24/dotnet-stacks-67/ (Sun, 24 Oct 2021)

Welcome to another week, full of top-notch product announcements and the proclamation of "View Source" as a crime. I too am a hacker, apparently.

Anyway, here's what we have going on this week:

  • Web updates with .NET 6 RC 2
  • New .NET 6 LINQ API: Chunk
  • Last week in the .NET world

Web updates with .NET 6 RC 2

Right on time, last week Microsoft rolled out the .NET 6 RC 2 release. It's the second of two "go live" RCs that are actually supported in production. Richard Lander's announcement post gets you up to speed on what's new with C# 10, including record structs, global usings, file-scoped namespaces, interpolated string improvements, and extended property patterns.

Despite being oh-so-close to the official release next month, the ASP.NET Core team was busy shipping updates for this release, and I'd like to pay attention to those. We saw updates in two big areas: native dependencies and Minimal APIs—the latter of which I've admittedly beaten to death, but which also plays a pivotal role in how you'll build new APIs in the .NET ecosystem (if you want to get over MVC, of course).

With Blazor WebAssembly, in .NET 6 you can now use native dependencies that are built to run on WebAssembly. Using new .NET WebAssembly build tools, you can statically link native dependencies. You can use any native code with this, like C/C++ code, archives, or even standalone .wasm modules. If you have some C code, for example, the build tools can compile and link it to a dotnet.wasm file. To extend it some more, you can use libraries with native dependencies as well.

As for Minimal APIs, you can now leverage parameter binding improvements. With RC2, TryParse and BindAsync methods can be inherited from base types. For example, BindAsync allows you to bind a complex type using inheritance. Check out Daniel Roth's blog post for details. The RC2 release also makes some OpenAPI enhancements and includes analyzers to help you find problems with middleware configuration or route handling.
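To make the TryParse convention concrete, here's a minimal sketch. The Point type and route are my own example, modeled on the shape of the feature:

public record Point(double X, double Y)
{
    // Minimal APIs discover this convention and bind ?point=(1.5,2.5) from the query string
    public static bool TryParse(string? value, out Point? point)
    {
        point = null;
        var segments = value?.Trim('(', ')').Split(',');
        if (segments?.Length == 2
            && double.TryParse(segments[0], out var x)
            && double.TryParse(segments[1], out var y))
        {
            point = new Point(x, y);
            return true;
        }
        return false;
    }
}

// Usage: app.MapGet("/map", (Point point) => $"Point: {point.X}, {point.Y}");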

It's hard to believe the next .NET 6 announcement will be the full release itself. As part of that, .NET Conf will celebrate the launch, and Visual Studio 2022 will launch on November 8. In the year 2021. (I know.)


New .NET 6 LINQ API: Chunk

If you haven't heard, .NET 6 will roll out some nice LINQ API improvements. Matt Eland recently wrote a nice piece about them. As a whole, it's nice to see .NET 6 deliver quite a few API improvements for things that previously required some light hacking. We all do some light hacking, of course, but for millions of .NET developers there are a lot of common use cases—and it's nice to see those being addressed.

My favorite recent LINQ API improvement is the Chunk API. If you work with large collections of objects, you can now chunk them for pagination or other "chunking" use cases. For paging, you previously had to set a page size, loop through the collection, add to some paging collection, and update counts.

Instead, as Matt notes, you could try something like this:

IEnumerable<Movie[]> chunks = movies.Chunk(PAGE_SIZE);

This should really help folks page through big API datasets. When you don't control the data coming back, you previously had to set this up yourself. Very nice.


🌎 Last week in the .NET world

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts and videos

Exploring C# 10: Global Using Declarations
https://www.daveabrock.com/2021/10/21/csharp-10-global-usings/ (Wed, 20 Oct 2021)

Welcome back to my series on new C# 10 features. I kicked off this series by writing about file-scoped namespaces, a simple but powerful way to remove unnecessary verbosity from your C# code. This time, we're talking about a new feature that achieves similar goals: global using declarations.

To see how this works, let's revisit my model from the last post, a Superhero:

namespace SuperheroApp.Models;

public class Superhero
{
   public string? FirstName { get; set; }
   public string? LastName { get; set; }
   public string? StreetAddress { get; set; }
   public string? City { get; set; }
   public int? MaxSpeed { get; set; }
}

Let's say I wanted to do some basic work with this model in a standard C# 10 console application. It would look something like this.

using SuperheroApp.Models;

var heroList = new List<Superhero>()
{
    new Superhero
    {
        FirstName = "Tony",
        LastName = "Stark",
        StreetAddress = "10880 Malibu Point",
        City = "Malibu",
        MaxSpeed = 500
    },
    new Superhero
    {
        FirstName = "Natasha",
        LastName = "Romanova",
        MaxSpeed = 200
     }
};

foreach (var hero in heroList)
    Console.WriteLine($"Look, it's {hero.FirstName} {hero.LastName}!");

Console.WriteLine($"The first superhero in the list is {heroList.First().FirstName}.");

You'll notice that, thanks to top-level statements, the file is already looking pretty slim. If I create a new file, like GlobalUsings.cs, I can store any usings that I want shared across my project's C# files. While you can declare global usings anywhere, it's probably a good idea to isolate them in one place. (This is scoped to the project, not the solution, so if you want this behavior across multiple projects, you'd specify global usings in each project.)

global using SuperheroApp.Models;

When I go back to my Program.cs file, Visual Studio reminds me that SuperheroApp.Models is referenced elsewhere (in my GlobalUsings.cs), so the local using can be safely removed.

I can also combine global with using static for common static classes, like Math or Console. While static usings aren't new—they've been around since C# 6—I can now make them global, too. Let's update my GlobalUsings.cs file to the following:

global using SuperheroApp.Models;
global using static System.Console;

Back in my Program.cs, Visual Studio tells me I don't need to use Console.

Aside from supporting standard namespaces (including system ones) and static usings, you can also use aliases. Let's say I wanted to globally use System.DateTime and alias it as DT. Here's how my GlobalUsings.cs file looks now:

global using SuperheroApp.Models;
global using static System.Console;
global using DT = System.DateTime;

Going back to my main Program.cs, here's how everything looks now:

var heroList = new List<Superhero>()
{
    new Superhero
    {
        FirstName = "Tony",
        LastName = "Stark",
        StreetAddress = "10880 Malibu Point",
        City = "Malibu",
        MaxSpeed = 500
    },
    new Superhero
    {
        FirstName = "Natasha",
        LastName = "Romanova",
        MaxSpeed = 200
     }
};

foreach (var hero in heroList)
    WriteLine($"Look, it's {hero.FirstName} {hero.LastName}!");

WriteLine($"The first superhero in the list is {heroList.First().FirstName}.");
WriteLine($"Last ran on {DT.Now}");

Are some common namespaces globally available by default?

When I created a collection and read from it, I used calls from a few standard Microsoft namespaces: System.Collections.Generic and System.Linq. You may have noticed something: why didn't I need to explicitly include these namespaces in a global usings file? The answer lies in the obj/Debug/net6.0 folder. After you build your project, you'll notice a GlobalUsings.g.cs file, which contains implicit usings. Because they're enabled by default in .NET 6 projects, you don't need to include them explicitly.

// <auto-generated/>
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;

The funky global:: syntax ensures that you don't see collisions if you have your own namespaces. For example, if I had a few drinks one night and decided to write my own (very buggy) I/O library and call it DaveIsCool.System.IO, this ensures I'm using the Microsoft namespace.

These usings are driven by an <ImplicitUsings> flag in my project file.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>

If you remove the <ImplicitUsings> line, for example, you would see build errors for every type that relied on those implicit namespaces.

A word of caution: in .NET 6 implicit usings are enabled by default, so if implicit usings aren't for you, this is where you would disable it. For details, take a look at the following document from the .NET team.

Breaking change: Implicit `global using` directives in C# projects - .NET
Learn about the breaking change in .NET 6 where the .NET SDK implicitly includes some namespaces globally in C# projects.

Can I incorporate global usings with project configuration?

To me, it does seem a little strange to be using C# source files to configure project details. If you don't want to use a C# file to specify your global using declarations, you can delete your file and use MSBuild syntax instead.

Once you open your project file, you can encapsulate your global usings in an <ItemGroup>, like so:

<ItemGroup>
    <Using Include="SuperheroApp.Models"/>
    <Using Include="System.Console" Static="True" />
    <Using Include="System.DateTime" Alias="DT" />
</ItemGroup>

Do I have to use these?

Much like other .NET 6 improvements, you can use these as much or as little as you want. You can use nothing but global and implicit using directives, mix them with regular using declarations, or do things as we've always done before. It's all up to you.

Between C# 9 top-level statements, implicit and global usings, and file-scoped namespaces, the C# team is removing a lot of clutter from your C# files. For example, here's how our Program.cs file would look in C# 8 or older.

using System;
using System.Collections.Generic;
using System.Linq;
using SuperheroApp.Models;

namespace SuperheroApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var heroList = new List<Superhero>()
            {
                new Superhero
                {
                    FirstName = "Tony",
                    LastName = "Stark",
                    StreetAddress = "10880 Malibu Point",
                    City = "Malibu",
                    MaxSpeed = 500
                },
                new Superhero
                {
                    FirstName = "Natasha",
                    LastName = "Romanova",
                    MaxSpeed = 200
                 }
            };

            foreach (var hero in heroList)
                Console.WriteLine($"Look, it's {hero.FirstName} {hero.LastName}!");

            Console.WriteLine($"The first superhero in the list is {heroList.First().FirstName}.");
            Console.WriteLine($"Last ran on {DT.Now}");
        }
    }
}

With top-level statements and global and implicit usings, we now have this.

var heroList = new List<Superhero>()
{
    new Superhero
    {
        FirstName = "Tony",
        LastName = "Stark",
        StreetAddress = "10880 Malibu Point",
        City = "Malibu",
        MaxSpeed = 500
    },
    new Superhero
    {
        FirstName = "Natasha",
        LastName = "Romanova",
        MaxSpeed = 200
     }
};

foreach (var hero in heroList)
    WriteLine($"Look, it's {hero.FirstName} {hero.LastName}!");

WriteLine($"The first superhero in the list is {heroList.First().FirstName}.");
WriteLine($"Last ran on {DT.Now}");

What do you think? Let me know in the comments. Stay tuned for my next topic on CRUD endpoints with the new .NET 6 Minimal APIs.

]]>
<![CDATA[ The .NET Stacks #66: 🧀 Who moved my cheese? ]]> https://www.daveabrock.com/2021/10/18/dotnet-stacks-66/ 61638c98bec39c004adc7061 Sun, 17 Oct 2021 19:38:34 -0500 I know I'm a day or so late. Sorry, I was reading about Web 3.0 and still don't know what it is. Do you? After reading this, it made me want to cry, laugh, and then cry again. As if Dapr and Dapper aren't enough, now we have "dapps."

Anyway, here's what we have going on this week (or last week):

  • Who moved my cheese: New templates for .NET 6 might surprise you
  • Community spotlight: Marten v4 is now live
  • Last week in the .NET world

Who moved my cheese: New templates for .NET 6 might surprise you

As the release of .NET 6 rapidly approaches—with .NET 6 RC2 coming out very soon and the official release next month—you might be taken aback by C# template changes. The .NET team is leveraging top-level statements and implicit using directives for some new .NET 6 templates. For example, top-level statements are helping to drive new capabilities like Minimal APIs. Microsoft has put together a new document, New C# templates generate top-level statements, and you should check it out.

With .NET 6, if you create a new console app using dotnet new console (or from Visual Studio tooling), you might expect to see this:

using System;

namespace MyVerboseApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Think again. You'll see this instead:

// See https://aka.ms/new-console-template for more information
Console.WriteLine("Hello, World!");

Much like with top-level statements, you probably have one of two opinions: it eliminates boilerplate and makes it simpler and more beginner-friendly; or it provides too many abstractions and "I didn't ask for this." I do like top-level statements and agree it eliminates a lot of boilerplate. When it comes to using args, though, I like to see where they are being passed in and not just a magic variable.

As of now, it looks like this will be the default template, consistent with Microsoft's push to make C# more concise and accessible. You can always use the "old" program style from the terminal using the framework and target-framework-override flags. This allows you to use the "old" template but force the project framework to be .NET 6.0. (It would have been simpler to pass in --leave-my-stuff-alone, but I digress.)

dotnet new console --framework net5.0 --target-framework-override net6.0

In terms of the tooling experience, the obvious answer will be to eventually introduce a checkbox in Visual Studio to enable/disable these simplified layouts. Until then, you can downgrade the target framework moniker (TFM) to net5.0.

If you don't like it, you can file an issue in the dotnet/templating repo (or chime in on a similar issue), or even create your own templates.


Community spotlight: Marten v4 is now live

Do you know about Marten? It's a popular .NET library that allows you to use PostgreSQL as both a document database and a powerful event store. Jeremy Miller wrote this week about the new v4 release. Jeremy got my attention by quoting "the immortal philosopher Ferris Bueller" but also highlighted the massive changes:

  • Reducing object allocations and dictionary lookups
  • LINQ support improvements
  • Better tracking for flexible metadata
  • Event sourcing improvements

It looks to be a great release, and congrats to the team and all the great contributors. Check out the GitHub repository for more details about the project.


🌎 Last week in the .NET world

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🏗 Design, testing, and best practices

🎤 Podcasts and videos

]]>
<![CDATA[ The .NET Stacks #65: 💡 Is there hope for a modern C# model? ]]> https://www.daveabrock.com/2021/10/10/dotnet-stacks-65/ 61587c23069a18003d6d3d79 Sun, 10 Oct 2021 11:33:00 -0500 Happy Monday to you all and happy October. I hope you have a good week. My goal: to develop a data model simpler than this one.

Here's what we have this week:

  • Exploring modern C#
  • Custom deployment layouts for Blazor WebAssembly apps
  • Last week in the .NET world

Exploring modern C#

With the official release of .NET 6 next month, we'll see the release of C# 10. Much like the .NET platform as a whole, the language is beginning to adopt incremental, annual releases going forward. The C# 10 release brings us features like record structs, global using directives, extended property patterns, file-scoped namespaces, and so on. Richard Lander's .NET 6 RC2 announcement looks at some of these.

As C# has shifted over the years from a strict OO language to a more general-purpose one, it's experienced some pushback on how heavy it's become. We tend to go in circles in the .NET community: yes, there's a lot—it needs to evolve to be a modern language, but it also can't break people who rely on the older features. Right on time, the newsletter Twitter account posted a link to an article asking: is C# too complex? It spawned an interesting discussion, both about the article itself and of C#.

I like what Kathleen Dollard, a Microsoft PM for .NET Core CLI and Languages, had to say in the thread.

Still, I don't think it's realistic to expect a single way to do things, but it can get out of hand. As a simple example, there are at least a half dozen ways to check if a string is empty. It's that there are so many ways to do a single thing: some are modern and some best practices, some not—all the while, we know the language will never take anything away for backward compatibility reasons.

For us experienced developers, we can say the language is evolving and just "use what works best for you." What if you don't, or don't want the mental weight of dealing with it all?

As I was listening to C# language designer Mads Torgersen on Bryan Hogan's No Dogma podcast, I found this following exchange quite interesting. (I know there's a lot in this exchange, but I wanted to post it all to avoid taking anything out of context.)

Q: All of these new features, feel like new ways of doing old things and I would imagine they could get confusing. A criticism about C# that's been around for a while is that there are so many ways to do one thing anyway, and now you've added a whole bunch of new ways to do things I could do already.
A: Yes. That's a real challenge. If you have a programming language and you want to keep it the best option for somebody making the next project, it has to be modern. It has to have ways of doing things that are as expressive, as sleek,  as ergonomic, performant, fast, and efficient as the other languages. So we have to keep evolving to do that. Now at the same time, we don't like throwing things out because we break people's existing code. What's a programming language designer to do?
We've lived with this conundrum for a while. I think we're starting to circle in on at least a partial solution to it. I can't say for sure that we're going to do this or how we're going to do it, but ... someone like Bill Wagner, who is doing our documentation and working on the C# Standard ...he's very essential in this discussion, said: "How about we try to define modern C#?" If you're coming to C# fresh or even as an existing C# programmer in general—if you have a pretty clean slate, this is the subset that you should be using. If you're outside of that subset it's either because you're trying something fancy like using Span<T> to do efficiency stuff or unsafe code, or it's because you're doing some legacy things. Otherwise, these are the ways you should be doing things.
And then we could reinforce that from the documentation point of view, from a learning point of view, and also from a tooling point of view—we would take these templates and take those little suggestions and say: "Hey, that's kind of an old fashioned thing you're doing there, it would be sleeker to use pattern matching here, why don't you?" In fact, we have plenty of those in place already. We just don't have a coordinated set of features that are like "this is modern C#."
Then that evolves with the language, like when usings come in and some other things fall out of what modern C# is. And now there's a better way to do things. Now, we don't tell you to put usings on top of the file anymore, and we don't have guidelines about whether to prune or not ... that's just gone out of modern C# and now a modern C# programmer doesn't do that anymore.  A sliding window that's current C# might help as long as we don't make it a worse experience to use all of the language. But maybe we can make it an even better experience to use the modern part of the language and can give you more guidance if you're not a veteran or if you don't want to chase down all the reasons for doing one or the other and you just want a recommendation—how should I be doing this? This is what we're currently working on.

I found this very interesting. A C# subset is not a new idea but I think Microsoft working on it is. In the past, I've seen some Microsoft resistance to that idea. It's early days but promising for the future of C#.


Custom deployment layouts for Blazor WebAssembly apps

This week, on the ASP.NET Blog, Javier Calvarro Nelson wrote about new .NET 6 extensibility points that will allow development teams to customize which files to package and publish with Blazor WebAssembly apps. The idea is that you could reuse this as a NuGet package. As Javier notes, this is important because some environments block the download and execution of DLLs from the network for security reasons.

The article lays out how to build the package yourself, but you don't have to. According to Daniel Roth, you can use a prebuilt experimental NuGet package to try it out. To use it, add a reference from your client project, update the server project to add the endpoint, and then publish your app. This is a good first step—hopefully, the end goal is to provide this out of the box.


🌎 Last week in the .NET world

📢 Announcements

📅 Community and events

🌎 Web development

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts and videos

]]>
<![CDATA[ Exploring C# 10: Save Space with File-Scoped Namespaces ]]> https://www.daveabrock.com/2021/10/05/csharp-10-file-scoped-namespaces/ 615bb50f069a18003d6d401a Mon, 04 Oct 2021 22:35:27 -0500 .NET 6 and C# 10 hit general availability next month (in November 2021). Much like I did with C# 9 last year, I'd like to write about C# 10 over my next few posts. First, let's talk about a simple yet powerful capability: file-scoped namespace declarations.

If you're new to C# and aren't familiar with namespaces, you use the namespace keyword to declare a scope that contains a set of related objects. What kind of objects? According to the Microsoft documentation, namespaces can contain classes, interfaces, enums, structs, or delegates. By default, the C# compiler adds an unnamed namespace for you—typically referred to as the global namespace. When you declare a specific namespace, you can still use identifiers present in the global namespace. See the C# docs for details.

So, how can you declare namespaces in C# 9 and earlier? Keeping with my superhero theme with my C# 9 posts, a Superhero.cs class would look like this:

using System;

namespace SuperheroApp.Models
{
   public class Superhero
   {
      public string? FirstName { get; set; }
      public string? LastName { get; set; }
      public string? StreetAddress { get; set; }
      public string? City { get; set; }
      public int? MaxSpeed { get; set; }
   }
}

Even with this simple example, you can see how much space this adds—both horizontally and vertically—with the indents and curly braces. With C# 10, you can make this cleaner with file-scoped namespaces.

Use C# 10 file-scoped namespaces to simplify your code

With C# 10, you can remove the curlies and replace them with a semicolon, like this:

namespace SuperheroApp.Models;

public class Superhero
{
   public string? FirstName { get; set; }
   public string? LastName { get; set; }
   public string? StreetAddress { get; set; }
   public string? City { get; set; }
   public int? MaxSpeed { get; set; }
}

If you have using statements in your file, you can place the namespace declaration either before or after them. However, it must come before any other members in the file, like your classes or interfaces. Many code analyzers, like StyleCop, encourage placing usings inside namespaces to avoid type confusion.
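
For example, both of these placements compile (a quick sketch):

// Usings can come before the file-scoped namespace declaration...
using System.Text;

namespace SuperheroApp.Models;

// ...or after it (move the using down here instead)—both placements are valid.

public class Superhero { /* ... */ }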

It's worth noting that as the "file-scoped" naming implies, this applies to any objects in the file—like classes, interfaces, structs, and so on. Also, you are limited to only defining one file-scoped namespace per file. You likely aren't surprised, as most of your files typically only have a single namespace.

Limitations with file-scoped namespaces

Now, let's review what you cannot do with file-scoped namespaces.

Mix traditional namespaces and file-scoped namespaces

If you want to mix a traditional namespace declaration and a file-scoped namespace in the same file, you can't—Visual Studio flags it with a compiler error.

Define multiple file-scoped namespaces

What if you want multiple namespaces in a file like this?

namespace SuperheroApp.Models;

public class Superhero
{
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public string? StreetAddress { get; set; }
    public string? City { get; set; }
    public int? MaxSpeed { get; set; }
}

namespace SuperheroApp.Models.Marvel;

public class MarvelSuperhero : Superhero
{
    public string[] MoviesIn { get; set; }
}

As discussed previously, you can only define one file-scoped namespace per file, so this won't compile.

If you want to accomplish this, you'll need to stay old school and define namespace declarations as you usually have.

namespace SuperheroApp.Models
{
    public class Superhero
    {
        public string? FirstName { get; set; }
        public string? LastName { get; set; }
        public string? StreetAddress { get; set; }
        public string? City { get; set; }
        public int? MaxSpeed { get; set; }
    }
}

namespace SuperheroApp.Models.Marvel
{
    public class MarvelSuperhero : Superhero
    {
        public string[] MoviesIn { get; set; }
    }
}

Nesting namespaces

With traditional namespaces, you can nest them like in the following example.

namespace SuperheroApp.Models
{
    public class Superhero
    {
        public string? FirstName { get; set; }
        public string? LastName { get; set; }
        public string? StreetAddress { get; set; }
        public string? City { get; set; }
        public int? MaxSpeed { get; set; }
    }

    namespace Marvel // nested; resolves to SuperheroApp.Models.Marvel
    {
        public class MarvelSuperhero : Superhero
        {
            public string[] MoviesIn { get; set; }
        }
    }
}

However, you cannot do this with file-scoped namespaces. When I try, I get another compiler error.

Wrapping up: another way to simplify C# code

If you've been following the last couple of C# releases, you've noticed this isn't the first change to cut down on boilerplate—see positional C# records and top-level statements for a few examples—and it likely won't be the last, either. Changes like this let you simplify your programs and keep the focus on the code that matters.

In the next post, I'll continue this theme by writing about global using directives in C# 10. I'll talk to you then.

]]>
<![CDATA[ The .NET Stacks #64: ⚡ Looking at Functions support in .NET 6 ]]> https://www.daveabrock.com/2021/10/03/dotnet-stacks-64/ 615083d9f55adf003bbe52fd Sun, 03 Oct 2021 11:26:27 -0500 Welcome to another week. I hope you had a good weekend. Before getting to the community links, I've got some quick items for you this week:

  • Azure Functions .NET 6 support
  • .NET Foundation Board election news
  • Syncing namespaces in Visual Studio
  • Last week in the .NET world

Azure Functions .NET 6 support

Last week, Anthony Chu announced the availability of the public preview for Azure Functions 4.0—which includes .NET 6 support. In case you weren't aware, Azure Functions has two programming models: in-process and isolated. Azure Functions 4.0 supports .NET 6 through the isolated model.

As Anthony states, the isolated model gives you greater control over Functions configuration and allows you to use DI and middleware as you do in your ASP.NET Core apps. Currently, you can only use this from CLI tooling. Support for Visual Studio and Visual Studio Code should arrive closer to the general .NET 6 release date in November (which is when Azure Functions 4.0 will leave public preview and become generally available).

Since we're talking about Azure Functions, long-term the in-process model will be retired in favor of the isolated model. According to the support timeline diagram Microsoft published, that won't happen until .NET 7 in November 2022.
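
If you want a feel for the isolated model, the bootstrap is a small console-style host. Here's a rough sketch based on the Microsoft.Azure.Functions.Worker packages—details may shift while this is in preview:

using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults() // wires up the isolated worker
    .ConfigureServices(services =>
    {
        // Register your own dependencies here, just like in ASP.NET Core.
    })
    .Build();

host.Run();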


.NET Foundation Board election news

The results are out for the 2021 .NET Foundation Board elections. Congratulations to the following folks who made the cut:

  • Mattias Karlsson
  • Frank Odoom
  • Rob Prouse
  • Javier Lozano (re-elected)

Syncing namespaces in Visual Studio

As Oleg Kyrylchuk mentions, in the latest preview of Visual Studio 2022 you can synchronize namespaces to mirror how the folder structure looks in Solution Explorer. It's a preview feature that looks to be quite useful—but use it with caution, as there isn't currently an easy way to undo it.


🌎 Last week in the .NET world

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts and videos

]]>
<![CDATA[ The .NET Stacks #63: 🗞 .NET 6 is for real now ]]> https://www.daveabrock.com/2021/09/26/dotnet-stacks-63/ 6145e9aa440b9d003d0fdafb Sun, 26 Sep 2021 01:00:00 -0500 Happy Monday! This week, we're talking about .NET 6 RC1 and some other topics. Oh, and if you're using Travis CI, maybe you shouldn't.


🔥 My favorites from last week

Announcing .NET 6 Release Candidate 1
Richard Lander

ASP.NET Core updates in .NET 6 Release Candidate 1
Daniel Roth

Migration to ASP.NET Core in .NET 6
David Fowler

Update on .NET Multi-platform App UI (.NET MAUI)
Scott Hunter

Visual Studio 2022 Preview 4 is now available!
Mads Kristensen

As expected, .NET 6 Release Candidate (RC) 1 rolled out last week. It's the first of two RC releases deemed "go-live" ready and supported in production. .NET 6 has been feature-complete for a while now, and as Richard Lander states, for this release the "team has been focused exclusively on quality improvements that resolve functional or performance issues in new features or regressions in existing ones." Still, the blog post is worth a read to understand .NET 6's foundational features.

As for the ASP.NET Core side of things, we're seeing quite a bit. The team is rolling out the ability to render Blazor components from JS, Blazor custom elements (on an experimental basis), the ability to generate Angular and React components with Blazor, .NET to JavaScript streaming, and template improvements—including the ability to opt-in to implicit usings—and much more.

Of course, it wouldn't be an ASP.NET Core release without talking about Minimal APIs. RC1 brings a lot of updates: better support for OpenAPI for defining metadata, parameter binding improvements (like allowing optional parameters in endpoint actions), and the ability to use multiple calls to UseRouting to support more middleware. David Fowler also dropped a nice guide for migrating ASP.NET Core to .NET 6. You'll want to check it out to understand how the new hosting model works.

Have a sad trombone ready? .NET MAUI will not be making it into .NET 6's official release in November, according to Microsoft's Scott Hunter. It's now looking like it'll be released in early Q2 of 2022. There was a lot to be done, and a slight delay beats a buggy release any day. You can also check out Scott's post for an overview of features rolled out with .NET MAUI Preview 8.

Going hand-in-hand with the .NET 6 releases, Visual Studio 2022 Preview 4 is now available. This release promises personal/team productivity improvements (like finding files) and thankfully a big update for the Blazor and Razor editors. You can now use VS 2022 to hot reload on file save in ASP.NET Core and also apply changes to CSS live. Check out the blog post for details.

📢 Announcements

HTTP/3 support in .NET 6
Sam Spencer

Along with all the other announcements last week, Sam Spencer writes about HTTP/3 support in .NET 6. As a refresher, Sam explains why HTTP/3 is important: "We have all gone mobile and much of the access is now from phones and tablets using Wi-Fi and cellular connections which can be unreliable. Although HTTP/2 enables multiple streams, they all go over a connection which is TLS encrypted, so if a TCP packet is lost all of the streams are blocked until the data can be recovered. This is known as the head of line blocking problem."

"HTTP/3 solves these problems by using a new underlying connection protocol called QUIC. QUIC uses UDP and has TLS built in, so it’s faster to establish connections as the TLS handshake occurs as part of the connection. Each frame of data is independently encrypted so it no longer has the head of line blocking in the case of packet loss."

The RFC for HTTP/3 is not yet finalized and subject to change, but you can start to play around with HTTP/3 and .NET 6 if you're up for getting your hands dirty. You can use the HttpClient as well if you enable a runtime flag. Check out the post for details.
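
For a taste of the client side, here's a hedged sketch—the runtime flag name comes from the preview docs and may change before .NET 6 and the RFC are final:

using System.Net;

// HTTP/3 support is gated behind a runtime flag in the .NET 6 previews.
AppContext.SetSwitch("System.Net.SocketsHttpHandler.Http3Support", true);

using var client = new HttpClient
{
    DefaultRequestVersion = HttpVersion.Version30,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionExact
};

var response = await client.GetAsync("https://localhost:5001");
Console.WriteLine(response.Version); // 3.0 if QUIC negotiation succeeded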

More from last week:

📅 Community and events

Announcing the Candidates .NET Foundation Election 2021
Nicole Miller

This post isn't new but if you're a member of the .NET Foundation, they are looking to fill a couple of seats on the Board of Directors. Browse the blog post to learn more about the candidates (Nicole has interviewed each candidate, as well). Voting ends today (September 20) at 11:59 PM PST. (Disclaimer: I was on the Election Board and would love to hear feedback about the process if you have it. We aren't perfect but are trying!)

More from last week:

🌎 Web development

WebSocket per-message compression in ASP.NET Core 6
Tomasz Pęczek

New with ASP.NET Core 6, you can compress WebSocket messages. Tomasz Pęczek has been working with it so far and has a nice post with a GitHub repository you can reference. It ships with a WebSocketAcceptContext object, which includes a DangerousEnableCompression property. Why?

"You might be wondering why such a "scary" property name. It's not because things may fail. If the client doesn't support compressions (or doesn't support compression with specific parameters), the negotiated connection will have compression disabled. It's about security. Similarly to HTTPS, encrypted WebSocket connections are subject to CRIME/BREACH attacks. If you are using encryption, you should be very cautious and not compress sensitive messages."

Also from last week:

🔧 Tools

Advanced Git Workflow Tips
Khalid Abuhakmeh

Over at the JetBrains blog, Khalid Abuhakmeh writes about ways to make working with Git easier, armed with some JetBrains Rider tips as well. I learned about cleaning your repository of non-tracked artifacts by using git clean -xdf. The command is irreversible, but it's a handy way to clear out the folder bloat that builds up when a repo accumulates non-tracked files like dependencies and build artifacts.
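
Since there's no undo, it's worth a dry run first—the -n flag prints what would be removed without deleting anything:

# Preview the untracked and ignored files that would be removed
git clean -xdn

# Then actually remove them (irreversible!)
git clean -xdf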

🏗 Design, testing, and best practices

⛅ The cloud

🥅 The .NET platform

📔 Languages

📱 Xamarin

🎤 Podcasts and Videos

]]>
<![CDATA[ The .NET Stacks #62: 👋 And we're back ]]> https://www.daveabrock.com/2021/09/19/dotnet-stacks-62/ 613ca82115458e004b3a6040 Sat, 18 Sep 2021 19:05:00 -0500 This is the web version of my weekly newsletter, The .NET Stacks, originally sent to email subscribers on September 13, 2021. Subscribe at the bottom of the post to get this right away!

Happy Monday! Miss me? A few of you said you have, but I'm 60% sure that's sarcasm.

As you know, I took the last month or so off from the newsletter to focus on other things. I know I wasn't exactly specific on why, and appreciate some of you reaching out. I wasn't comfortable sharing it at the time, but I needed to take time away to focus on determining the next step in my career. If you've interviewed lately, I'm sure you understand ... it really is a full-time job.  I'm happy to say I've accepted a remote tech lead role for a SaaS company here.

I'm rested and ready, so let's get into it! I'm trying something a little different this week—feel free to let me know what you think.


🔥 My favorite from last week

ASP.NET 6.0 Minimal APIs, why should you care?
Ben Foster

We've talked about Minimal APIs a lot in this newsletter, and they're quite the hot topic in the .NET community. As an alternative way to write APIs in .NET 6 and beyond, they have a lot of folks wondering: are they suitable for production, or will they lead to misuse?

Ben notes: "Minimal simply means that it contains the minimum set of components needed to build HTTP APIs ... It doesn’t mean that the application you build will be simple or not require good design."

"I find that one of the biggest advantages to Minimal APIs is that they make it easier to build APIs in an opinionated way. After many years building HTTP services, I have a preferred approach. With MVC I would replace the built-in validation with Fluent Validation and my controllers were little more than a dispatch call to Mediatr. With Minimal APIs I get even more control. Of course if MVC offers everything you need, then use that."

In a similar vein, Nick Chapsas has a great walkthrough on strategies for building production-ready Minimal APIs. No one expects your API to be in one file, and he shows practical ways to deal with dependencies while leveraging minimal API patterns. Damian Edwards has a nice Twitter thread, as well. As great as these community discussions are, I really think the greatest benefit is getting lost: the performance gains.
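
If you haven't tried one yet, a complete, runnable .NET 6 minimal API really is this small:

var app = WebApplication.Create(args);

app.MapGet("/hello", () => "Hello, Minimal APIs!");

app.Run();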


📅 Community and events

Increasing developer happiness with GitHub code scanning
Sam Partington

If you work in GitHub, you probably already know that GitHub utilizes code scanning to find security vulnerabilities and errors in your repository. Sam Partington writes about something you might not know: they use CodeQL—their internal code analysis engine—to protect themselves from common coding mistakes.

Here's what Sam says about loopy performance issues: "In addition to protecting against missing error checking, we also want to keep our database-querying code performant. N+1 queries are a common performance issue. This is where some expensive operation is performed once for every member of a set, so the code will get slower as the number of items increases. Database calls in a loop are often the culprit here; typically, you’ll get better performance from a batch query outside of the loop instead."

"We created a custom CodeQL query ... We filter that list of calls down to those that happen within a loop and fail CI if any are encountered. What’s nice about CodeQL is that we’re not limited to database calls directly within the body of a loop―calls within functions called directly or indirectly from the loop are caught too."

You can check out the post for more details and learn how to use these queries or make your own.

More from last week:


🌎 Web development

How To Map A Route in an ASP.NET Core MVC application
Khalid Abuhakmeh

If you're new to ASP.NET Core web development, Khalid put together a nice post on how to add an existing endpoint to an existing ASP.NET Core MVC app. Even if you aren't a beginner, you might learn how to resolve sticky routing issues. At the bottom of the post, he has a nice checklist you should consider when adding a new endpoint.

More from last week:


🥅 The .NET platform

Using Source Generators with Blazor components in .NET 6
Andrew Lock

When Andrew was upgrading a Blazor app to .NET 6, he found that source generators that worked in .NET 5 failed to discover Blazor components in his .NET 6 app because of changes to the Razor compilation process.

He writes: "The problem is that my source generators were relying on the output of the Razor compiler in .NET 5 ... My source generator was looking for components in the compilation that are decorated with [RouteAttribute]. With .NET 6, the Razor tooling is a source generator, so there is no 'first step'; the Razor tooling executes at the same time as my source generator. That is great for performance, but it means the files my source generator was relying on (the generated component classes) don't exist when my generator runs."

While this is by design, Andrew has a great post underlying the issue and potential workarounds.

More from last week:


⛅ The cloud

Minimal Api in .NET 6 Out Of Process Azure Functions
Adam Storr

With all this talk about Minimal APIs, Adam asks: can I use it with the new out-of-process Azure Functions model in .NET 6?

He says: "Azure Functions with HttpTriggers are similar to ASP.NET Core controller actions in that they handle http requests, have routing, can handle model binding, dependency injection etc. so how could a 'Minimal API' using Azure Functions look?"

More from last week:


🔧 Tools

New Improved Attach to Process Dialog Experience
Harshada Hole

With the 2022 update, Visual Studio is improving the debugging experience—included is a new Attach to Process dialog experience.

Harshada says: "We have added command-line details, app pool details, parent/child process tree view, and the select running window from the desktop option in the attach to process dialog. These make it convenient to find the right process you need to attach. Also, the Attach to Process dialog is now asynchronous, making it interactive even when the process list is updating." The post walks through these updates in detail.

More from last week:


🏗 Design, testing, and best practices

Ship / Show / Ask: A modern branching strategy
Rouan Wilsenach

Rouan says: "Ship/Show/Ask is a branching strategy that combines the features of Pull Requests with the ability to keep shipping changes. Changes are categorized as either Ship (merge into mainline without review), Show (open a pull request for review, but merge into mainline immediately), or Ask (open a pull request for discussion before merging)."

More from last week:


🎤 Podcasts and Videos

]]>
<![CDATA[ Exploring Fiddler Jam: Get Your Time Back by Solving Issues Faster ]]> https://www.daveabrock.com/2021/08/25/exploring-fiddler-jam/ 611945c2dab7b3003b9e98a8 Wed, 25 Aug 2021 08:38:00 -0500 This article was originally posted on the Telerik Developer Blog.

Your most important resource is time. When non-technical users encounter issues and need your help, how much of this important resource are you spending?

We've all been there: a user reports an issue that often doesn't have the context and details you would like—for example, "the site is slow"—and your support and engineering teams eat up a lot of their time on the back and forth. You might ask the user to send along screenshots of the issue. You might have to get on a call to reproduce the issue—and risk an "it worked differently before" situation. In addition, passing around insecure data can lead to security and compliance issues with your organization.

This painful process provides headaches for everyone. End users don't have an easy way to submit logs securely, support staff wastes a lot of back-and-forth time to reproduce the issue, and engineering teams have trouble understanding the context of the issue and reproducing it successfully.

Have you heard about Fiddler Jam? A relative newcomer to the Telerik Fiddler suite of products, Jam is a solution to help support and development teams troubleshoot remote issues with web applications in a quick, easy and secure fashion. How does it work? Users of your site can use a Chrome extension to share the full context of the issue—capturing network logs, browser console logs and screenshots—and the support team can analyze the data immediately. If needed, developers can reproduce and debug the issue using the Fiddler Everywhere debugging proxy.

When I first learned about Fiddler Jam, I had a question that you might be asking yourself, too: in such a busy market, why should I use Fiddler Jam when I have free tools like a browser's developer tools? In this post, I'll make the case for Fiddler Jam in three different ways.

Users Can Easily Send Issue Information

When getting issue details from a user, you'll lack appropriate context—whether users call you, open a support ticket or email screenshots. You're now left with putting together a puzzle from what you're told and what your telemetry tells you. While it's relatively easy to know when an API service is down, what about a stubborn click event that isn't logged?

With Fiddler Jam, it's easy to allow non-technical users to provide context around the issue. Users who are already frustrated that things aren't working only get more frustrated when asked to spend time reproducing an issue for you. With a couple of button clicks, users can provide the complete context of their interaction with your web app.

After asking users to download the Fiddler Jam Chrome extension—which you can use with Google Chrome and Microsoft Edge—users will see a Fiddler Jam icon next to the browser's address bar. Users can then click the icon to start capturing the issue.

Robust Security Practices are Built-In

By default, Fiddler Jam also handles the work of sending data securely. If you review the Advanced Options in the Chrome extension, you'll see a few options you can configure based on your security needs. All options are enabled by default.

  • Take screenshots while capturing - This option adds a screenshot from the active Chrome tab. If the screen shows sensitive data, consider disabling it.
  • Capture console - This option records any developer console outputs. Again, consider disabling this if your logs have sensitive information.
  • Mask cookies - This option masks cookie values so that they aren't visible in the logs. The cookie names will still be visible.
  • Mask post data - This option masks data sent along in a POST request, like form information.
  • Disable cache - This option asks the browser to skip the network request cache and download fresh content from the server.

After users capture a session, they can share the session using a public link or Password Protection.

The password-protected logs are secured with AES-CTR encryption, and all logs are stored in the cloud. The Fiddler Jam team won't have access to (and can't recover) password-protected log content. See the Fiddler Jam documentation for details.

Use Powerful Tooling to Review the Entire Context Quickly

With Fiddler Jam, you can use various tooling options to make the most of the data. When a user shares a link, you'll be sent to the Telerik Fiddler Jam dashboard for a detailed view of the logs and information on the request—like headers and request/response bodies. Here's how it looks when I'm looking at a server error, where I have Something bad happened as a response. (I suppose it is an accurate statement but could use more investigation.)

By default, I can also look at user actions. I can look at the button press that initiated the call to the server.

In future versions, Fiddler Jam will take screenshots of more user actions like inputs and scroll events. These actions are stored in the cloud at no additional cost to you. It's one of my favorite features: there's no gap between what a user said they did and what really happened. It's all here.

The data comes in the format of an HTTP Archive (HAR) file, which is an industry-standard JSON-formatted file that saves HTTP performance data. You can download the HAR file for further inspection and even load the HAR file for mocking. To be clear, this is not the mocking my team does to me when I introduce a bug to the codebase. Instead, Jam sends the data to the Chrome extension, where you can load a session in the browser.

Finally, the engineering team can open the session in Fiddler Everywhere for further investigation. With Fiddler Everywhere's powerful capabilities, you can even replay requests from a user's session!

Summary

In this post, I've shown you how Fiddler Jam provides robust capabilities to help you reproduce and resolve issues quickly. More than just another set of developer tools, it's a full end-to-end troubleshooting solution designed to work well for users, support, and engineering alike. Get started with Fiddler Jam today by going to the Fiddler Jam site and signing up for a free 14-day trial.

]]>
<![CDATA[ The .NET Stacks #61: 🚌 Let's go on a community blog tour ]]> https://www.daveabrock.com/2021/08/15/dotnet-stacks-61/ 61106fdadb671e003e96a283 Sun, 15 Aug 2021 11:48:00 -0500 Would you look at that — it's Monday again. I hope you had a wonderful weekend. Here's what we have going on this week.

  • Let's go on a community blog tour
  • Did you know?
  • A housekeeping note
  • Last week in the .NET world

Let's go on a community blog tour

Every week, I provide a big list of quick links for you to check out. I have them this week as I always do, but I wanted to also talk about five or so community blog posts this week. (To be clear, there are always many more great posts from the community that deserves special recognition. These are just some posts that caught my eye this week.)

--

My favorite post this week comes from Josef Ottoson, who describes sorting really large files in C#. If you think algorithms have limited real-world applications, think again: after starting with an in-memory approach, he compares external merge sort, splitting, and merging. He shows how an external merge sort consumes only 32 megabytes, versus almost 6 gigabytes for the in-memory solution. (As expected, sorting in memory is a little bit faster.)

--

On the heels of the "Focus on F" .NET Conf, F# creator (and the creator of C# 8 and 9 😉) Don Syme has a Github readme called What operators and symbols does an F# programmer need to know? As someone who's been wanting to learn F# for the last few years now, I appreciate the candor: as with C#, there's much of the language that you don't need on a daily basis.

--

Over at the Visual Studio Code blog, Chris Dias writes about the rise of notebooks. School is starting back up soon, but I'm not talking about those: here I mean documents that contain text, executable code, and the code's output. They provide interesting use cases for learning and development.

With VS Code, the experience in working with notebooks is a big concern:

To complete the metaphor, notebooks in VS Code have matured from those awkward teen years into (young) adulthood, confident and strong, with a bright future. Working with VS Code notebooks may take a bit of adjustment if you are moving from Jupyter, but we hope it will be worth it in the end. And, as we always try to do, you can customize the experience through settings.

--

We've geeked out about .NET source generators quite a few times in this space. To that end, last week Khalid Abuhakmeh wrote about using them to retrieve class declarations from a project. He uses an ISyntaxReceiver, which collects the class declarations as the compiler visits syntax nodes.
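
For flavor, here's a bare-bones sketch of the ISyntaxReceiver pattern—the collector name is mine, and in a real generator you'd register it via RegisterForSyntaxNotifications:

using System.Collections.Generic;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// The receiver is called once per syntax node during compilation;
// this one simply collects every class declaration it sees.
public class ClassCollector : ISyntaxReceiver
{
    public List<ClassDeclarationSyntax> Classes { get; } = new();

    public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
    {
        if (syntaxNode is ClassDeclarationSyntax classDeclaration)
            Classes.Add(classDeclaration);
    }
}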

--

Jeremy Miller wrote a nice post about testing effectively with or without mocks or stubs. Are you spending most of your time writing effective tests or simulating data with mocks to avoid writing integration tests? With the advancements in networking and containerization, does it need to be an either/or discussion?

He writes:

In the past, we strongly preferred writing “solitary” tests with or without mock objects or other fakes because those tests were reliable and ran fast. That’s still a valid consideration, but I think these days it’s much easier to author more “socialable” tests that might even be using infrastructure like databases or the file system than it was when I originally learned TDD and developer testing. Especially if a team is able to use something like Docker containers to quickly spin up local development environments, I would very strongly recommend writing tests that work through the data layer or call HTTP endpoints in place of pure, Feathers-compliant unit tests.

Did you know?

Did you know that in Visual Studio Code, you can make Integrated Terminal sessions appear as editor tabs, instead of taking up valuable space in the panel? Not me.


Quick housekeeping note

A quick note: The .NET Stacks is taking a brief break until September. Over the next several weeks, my free time will be extremely limited and I won't be able to devote the time. Thanks as always for your support and I'll be back next month.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

Over at JetBrains, the v2021.2 releases for dotCover, dotMemory, dotTrace, and dotPeek have arrived. Alexandra Kolesova writes about updates to Rider 2021.2, and Alexander Kurakin writes about updates to ReSharper 2021.2.

📅 Community and events

🌎 Web development

🥅 The .NET platform

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks, #60: 📝Logging improvements in .NET 6 ]]> https://www.daveabrock.com/2021/08/08/dotnet-stacks-60/ 6106bb6f2fff1d003b81adab Sun, 08 Aug 2021 12:14:00 -0500 Happy Monday! It's hard to believe it's August already. What are we talking about this week?

  • One big thing: Logging improvements in .NET 6
  • The little things: Notes from the community
  • Last week in the .NET world

One big thing: Logging improvements in .NET 6

Over the last few months, we've highlighted some big features coming with .NET 6. Inevitably, there are some features that fly under the radar, including various logging improvements that are coming. Let's talk about a few of them.

Announced with Preview 4 at the end of May, .NET will include a new Microsoft.Extensions.Logging compile-time source generator. The Microsoft.Extensions.Logging namespace will expose a LoggerMessageAttribute type that generates "performant logging APIs" at compile-time. (The source code relies on ILogger, thankfully.)

From the blog post:

The source generator is triggered when LoggerMessageAttribute is used on partial logging methods. When triggered, it is either able to autogenerate the implementation of the partial methods it’s decorating, or produce compile-time diagnostics with hints about proper usage. The compile-time logging solution is typically considerably faster at run time than existing logging approaches. It achieves this by eliminating boxing, temporary allocations, and copies to the maximum extent possible.

Rather than using LoggerMessage APIs directly, the .NET team says this approach offers simpler syntax with attributes, the ability to offer warnings as a "guided experience," support for more logging parameters, and more.

As a quick example, let's say I can't talk to an endpoint. I'd configure a Log class like this:

public static partial class Log
{
    [LoggerMessage(EventId = 0, Level = LogLevel.Critical, Message = "Could not reach `{uri}`")]
    public static partial void CouldNotOpenSocket(ILogger logger, string uri);
}
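
Calling the generated method then looks like any other logging call. A quick sketch—the logger here is assumed to come from a logger factory or DI, and the URI is a placeholder:

ILogger logger = LoggerFactory
    .Create(builder => builder.AddConsole())
    .CreateLogger("Sockets");

// Emits a Critical-level event: Could not reach `https://contoso.com`
Log.CouldNotOpenSocket(logger, "https://contoso.com");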

There is much, much more at the Microsoft doc, Compile-time logging source generation.

On the topic of HTTP requests, ASP.NET Core has a couple of new logging improvements worth checking out, too. A big one is the introduction of HTTP logging middleware, which can log information about HTTP requests and responses, including the headers and body. According to the documentation, the middleware logs common properties like the path, query string, status code, and headers at the LogLevel.Information level. The HTTP logging middleware is quite configurable: you can choose which fields to log, which headers to log (you need to explicitly define the headers to include), media type options, and a size limit for logged request and response bodies.
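
Here's a rough sketch of the wiring in a .NET 6 app—the field choices and the correlation header are illustrative:

using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpLogging(logging =>
{
    // Log only what you need—headers must be opted into explicitly.
    logging.LoggingFields = HttpLoggingFields.RequestPath |
                            HttpLoggingFields.ResponseStatusCode;
    logging.RequestHeaders.Add("X-Correlation-Id"); // illustrative header
    logging.ResponseBodyLogLimit = 4096;
});

var app = builder.Build();

app.UseHttpLogging();
app.MapGet("/", () => "Hello, HTTP logging!");
app.Run();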

This addresses a common use case for web developers. As always, you'll need to keep an eye on possible performance impacts and mask sensitive information (like PII) before it hits your logs.

Lastly, with .NET 6 Preview 5 the .NET team rolled out subcategories for Kestrel log filtering. Previously, verbose Kestrel logging was quite expensive because all of Kestrel shared the same category name. There are new categories under the existing Microsoft.AspNetCore.Server.Kestrel category, including BadRequests, Connections, Http2, and Http3. This allows you to be more selective about which rules you enable. As shown in the blog post, if you only want to filter on bad requests, it's as easy as this:

{
  "Logging": {
    "LogLevel": {
      "Microsoft.AspNetCore.Kestrel.BadRequests": "Debug"
    }
  }
}

If you want to learn more about ASP.NET Core logging improvements in .NET 6, Maryam Ariyan and Sourabh Shirhatti will be talking about it in Tuesday's ASP.NET standup.


The little things: Notes from the community

AutoMapper is a powerful but self-proclaimed "simple little library" for mapping objects. From Shawn Wildermuth on Twitter, Dean Dashwood has written about Three Things I Wish I Knew About AutoMapper Before Starting. It centers around subtleties when working with dependency injection in models, saving changed and nested data, and changing the models. It's actually a GitHub repo with a writeup and some code samples—highly recommended.


Sometimes one sentence does the trick: Cezary Piątek has a nice MSBuild cheatsheet on GitHub.


This week, Mike Hadlow wrote about creating a standalone ConsoleLoggerProvider in C#. If you want to create a standalone provider to avoid working with dependency injection and hosting, Mike found an ... interesting ... design. He develops a simple no-op implementation.


Well, it's been 20 years since the Agile Manifesto, and Al Tenhundfeld celebrates it with the article Agile at 20: The Failed Rebellion. I think we can all agree that (1) everybody wants to claim they are Agile, and (2) Agile is a very, um, flexible word—nobody is really doing Agile (at least according to the Manifesto). He explores why; an interesting read.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #59: 🎇 When your Copilot is mostly right ]]> https://www.daveabrock.com/2021/08/01/dotnet-stacks-59/ 60fac0a18d4bf3004832502b Sun, 01 Aug 2021 10:53:00 -0500 Welcome to another week, all. I hope your Monday is off to a good start.

If you follow me on Twitter, you may have heard I got attacked by six bees last week. I'm fine but appreciate some of you reaching out with comments about whether I'm still "buzzing around somewhere" or "minding my own beeswax" or "ooh, that stings" or even sending memes of a guy with a beard made of bees.

Here's what hive I've got for you this week:

  • One big thing: Thoughts on GitHub Copilot
  • The little things: A new source generator for JSON serialization, F# getting some love
  • Last week in the .NET world

One big thing: Thoughts on GitHub Copilot

Have you caught all the buzz about GitHub Copilot? It's been out in private preview for about a month and I've been checking it out. Marketed by GitHub as "your AI pair programmer," it provides predictions for your code—either as a single line or even an entire function. It's based on GPT-3, a model that uses deep learning to produce human-like text.

To quote Colin Eberhardt:

Copilot is based on Codex, a new model which is a derivative of GPT3, which has been trained on vast quantities of open source code from GitHub. It is integrated directly with VSCode in order to generate suggestions based on a combination of the current context (i.e. your code), and the ‘knowledge’ it has acquired during the training process.

There's a ton of examples online from amazed developers. For example, you can enter a comment like // Call GitHub API and Copilot will create a function that calls the GitHub API for you. Of course, this led to some snarky comments about AI taking our jobs. If you're concerned about that, I'd ask you to look at some of the code Copilot spits out. It isn't perfect, but it learns from you ... and does an OK job. To quote philosopher Brian Fantana: "60% of the time, it works every time."

Copilot has massive potential to improve developer workflow and efficiency. For example, how many times do you have to ask Google how to do something you haven't done in a while? Let's say you haven't called HttpClient in a while, or you want to read from a CSV file. You search how to do it, find official docs or Stack Overflow, and go back to your editor and do your best. What if you can cut out all that manual work and generate that code in a few keystrokes? You can focus on the problem you're solving and not on "how do I" tasks. As a smart and experienced developer, you'll know that you'll need to understand the code and tweak it as needed. I do worry about those who won't. It's been easy to blindly introduce code since the invention of the Internet, of course, but doing it from your editor cranks it up a few notches.

From Matthew MacDonald's excellent piece on Copilot:

As long as you need to spend effort reviewing every piece of suggested code, and as long as there is no way to separate a rock-solid implementation from something fuzzier and riskier, the value of Copilot drops dramatically. Not only will it save less time, but it risks introducing elementary mistakes that a less experienced programmer might accept, like using float for a currency variable.

This offering—which will eventually be sold by Microsoft—spends a lot of the reputational capital Microsoft has been building with developers since it started embracing open-source software. While it fits with Microsoft's mission to own the developer experience—from GitHub and NPM to Azure and VS Code, just to name a few—I've seen some Microsoft skeptics viewing this as the company cashing in on its goodwill with the community. While you can't be shocked to see a company mining its free service's public data for eventual profit, there are quite a few licensing and legal challenges that need further clarification (saying "the code belongs to you" in its FAQs might not get by a company's legal department).

Even so, I think Copilot will eventually turn into a powerful tool that can help to automate the painful parts of programming.


The little things: A new source generator for JSON serialization, F# getting some love

While it's easy to highlight the new .NET stuff like hot reload and AOT compilation, it's just as important to write about the stuff we do every day: like serializing and deserializing JSON. Is it exciting? No. Is it fun? Also no. It's something we do all the time, though—so it was nice to see Microsoft roll out a new System.Text.Json source generator.

We've talked about source generators before, but a refresher: it's code that runs during compilation and can produce additional files that are compiled together with the rest of your code. It's a compilation step that generates code for you, based on your code, and it creates a lot of possibilities. It provides an alternative to expensive reflection code, which traditionally leads to slow startup and memory issues.

This new source generator works with JsonSerializer (and does not break existing implementations, don't worry). From the post on the topic:

The approach the JSON source generator takes to provide these benefits is to move the runtime inspection of JSON-serializable types to compile-time, where it generates a static model to access data on the types, optimized serialization logic using Utf8JsonWriter directly, or both.

In the end, moving much of this from runtime to compile-time will help with throughput, memory, and assembly trimming capabilities. The trimming is a big one. The generator can shed a lot of unused code and dependencies.
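If you're curious what opting in looks like, here's a minimal sketch—MyEvent and AppJsonContext are names I made up for illustration:

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class MyEvent
{
    public string Name { get; set; }
    public DateTime Date { get; set; }
}

// The source generator fills in this partial class at compile time
[JsonSerializable(typeof(MyEvent))]
internal partial class AppJsonContext : JsonSerializerContext
{
}

public static class Serializer
{
    // Serializes using generated, reflection-free metadata
    public static string ToJson(MyEvent e) =>
        JsonSerializer.Serialize(e, AppJsonContext.Default.MyEvent);
}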

I think the post does a good job of outlining the capabilities and the rationale for introducing yet another set of APIs.


It's been nice to see F# getting some love lately. Microsoft's latest .NET Conf will be Focus on F#, which will take place on Thursday. Last week, the .NET Docs Show held an F# AMA and Microsoft introduced its first F# Learn modules. Even if you're a C# lifer, it's hard to argue against the impact of functional languages such as F#. C# has spent the last few major releases introducing constructs like pattern matching, records, and immutability-focused features—features that languages like F# have enjoyed for years.

As for Focus on F#, there's a lot of nice stuff on the agenda. I'm looking forward to seeing F# creator Don Syme teaching Python creator Guido van Rossum F# from scratch. For more information on F# and its context within the C# ecosystem, I recommend brushing up on my interviews last year with Isaac Abraham and Phillip Carter.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

Mike Hadlow creates a Microsoft.Extensions.DependencyInjection object graph writer.

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #58: 📃 6 things about .NET 6 Preview 6 ]]> https://www.daveabrock.com/2021/07/25/dotnet-stacks-58/ 60f1f46c8846a6003bd4f480 Sun, 25 Jul 2021 13:36:00 -0500 Welcome to another wonderful week. I just learned you can put a timestamp in Notepad by hitting F5. I saw this by reading a blog post from 2002. How's your Monday?

Here's what we have going on in #58:

  • One big thing: 6 things about .NET 6 Preview 6
  • The little things: Visual Studio 2022 Preview 2, abstracting Kubernetes
  • Last week in the .NET world

One big thing: 6 things about .NET 6 Preview 6

Last week, the .NET team rolled out .NET 6 Preview 6. Richard Lander has the main blog post.  Dan Roth writes about ASP.NET Core updates, Jeremy Likness updates us on EF Core 6, and David Ortinau covers .NET MAUI. Preview 6 is the second to last preview release of .NET 6. It appears to be a smaller release, with Preview 7 being bigger, as it's the last preview release.  .NET 6 will end up with seven preview releases, then two RC releases (the latter of which should come with go-live licenses).

The releases will be winding down soon, as Richard Lander says: "We’ll soon be addressing only the most pressing feedback, approaching the same bug bar that we use for servicing releases. If you’ve been holding on to some feedback or have yet to try .NET 6, please do now. It’s your last chance to influence the release."

If you want the full details, you can check out the links above—as for me, I'd like to highlight six things that caught my eye.

Sync-over-async thread performance. The sync-over-async pattern—which means using synchronous functionality asynchronously—is a common source of performance degradation and thread starvation. A new change in Preview 6 will improve the rate of default thread injection when the only blocking work on the thread pool is sync-over-async.
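If the term is fuzzy, here's a minimal sketch of the pattern (the method names are mine):

// Sync-over-async: a thread-pool thread blocks while async work completes.
// Under load, each blocked thread starves the pool—this is the case where
// Preview 6 injects new threads more aggressively.
public string GetReportBlocking()
{
    // The calling thread sits idle until the task finishes
    return BuildReportAsync().GetAwaiter().GetResult();
}

private async Task<string> BuildReportAsync()
{
    await Task.Delay(100); // stand-in for real I/O
    return "report";
}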

New functionality for working with parameters in ASP.NET Core. The ASP.NET Core team released two new features that help when working with components in ASP.NET Core.

The first involves required Blazor component parameters. You do this by using a new [EditorRequired] attribute, like this:

[EditorRequired]
[Parameter]
public string Title { get; set; }

This functionality is enforced at design time and when building—it's important to note it isn't enforced at runtime and won't guarantee the value of the parameter won't be null.

The next feature now supports optional parameters for view component tag helpers in ASP.NET Core. With this update, you no longer need to specify a value for an optional parameter as a tag helper attribute.
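For example—the component and attribute names here are hypothetical—you previously had to supply a value for every parameter, even optional ones:

<!-- Before Preview 6: every parameter needed an attribute value -->
<vc:product-list category="shirts" max-items="10"></vc:product-list>

<!-- Now: optional parameters (here, max-items) can simply be omitted -->
<vc:product-list category="shirts"></vc:product-list>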

WebSocket compression. ASP.NET Core now supports WebSocket compression. This can yield tremendous performance benefits, but it comes with a ton of caveats. It's disabled by default because enabling it over encrypted connections can make your app subject to attacks. As such, you should only enable it when you know sensitive information isn't being passed around. If that isn't clear enough, the setting is called DangerousEnableCompression. No, seriously.
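Here's roughly what opting in looks like from middleware—treat it as a sketch, and heed the warnings above:

app.UseWebSockets();

app.Use(async (context, next) =>
{
    if (context.WebSockets.IsWebSocketRequest)
    {
        // Only do this when no sensitive data will cross the socket
        using var socket = await context.WebSockets.AcceptWebSocketAsync(
            new WebSocketAcceptContext { DangerousEnableCompression = true });
        // ... handle messages on the socket
    }
    else
    {
        await next();
    }
});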

Minimal API improvements. We've talked about .NET 6 Minimal APIs quite a bit—so much you're probably sick of me talking about them. Even so, the team keeps building them out and Preview 6 brings some enhancements.

First, you can now integrate those APIs with OpenAPI (Swagger) support. You can do this using dependency injection (DI) or middleware. Here's an example using the DI method:

builder.Services.AddEndpointsApiExplorer();

builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Description = "My Swagger", Version = "v1" });
});

Second, you can now inject services without the [FromServices] attribute. Previously, you needed to do something like this:

app.MapGet("/movies", async ([FromServices] MovieDbContext db) =>
{
    return await db.Movies.ToListAsync();
});

And now it's this:

app.MapGet("/movies", async (MovieDbContext db) =>
{
    return await db.Movies.ToListAsync();
});

Pre-convention model configuration. With EF Core 6 Preview 6, the team released pre-convention model configuration. From the announcement, Jeremy Likness says it's part of "finding ways to enhance model building so that it can more efficiently figure out what is an entity type and what is not." As more types are mapped, it can be painful to exclude types as entity types (bringing all that overhead into a model) and to revert types from being entity types in certain situations.

You can now use a ConfigureConventions override to avoid the hassle of configuring every entity. In Jeremy's example, if you want to always store strings as byte arrays, you can use the override instead:

protected override void ConfigureConventions(ModelConfigurationBuilder configurationBuilder)
{
    configurationBuilder.Properties<string>()
        .HaveConversion<byte[]>()
        .HaveMaxLength(255);

    configurationBuilder.IgnoreAny<INonPersisted>();
}

.NET MAUI packaged as workload installation. With .NET 6, Xamarin is merging into .NET Multi-platform App UI (.NET MAUI). MAUI is aiming to be a cross-platform framework for creating mobile and desktop apps with C# and XAML. I don't mention it much here in the newsletter for two reasons—I can't speak about it intelligently because I have no experience with it, and I'm allergic to XAML. The last time I opened a XAML file I got a fever and was in bed for three days.

Even so, there's a lot of great work going into it, and Preview 6 marks the first time MAUI ships as a workload installation. As with the rest of .NET 6, you use SDK workloads to install just the app types you need. In MAUI's case, the team is introducing maui, maui-mobile, and maui-desktop workloads, and soon Visual Studio 2022 will include these workloads in the installer.
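If you want to kick the tires from the command line, it's a one-liner (workload names may shift between previews):

dotnet workload install maui
dotnet workload list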

MAUI is getting close to feature-complete for .NET 6. In this release, the team is rolling out gesture recognizers, clipping region enhancements, and native alerts. Check out the announcement to learn more.


The little things: Visual Studio 2022 Preview 2, abstracting Kubernetes

Since we're on the topic of previews, Visual Studio 2022 Preview 2 is out. Preview 2 is fully localized and ships with over a dozen language packs.

In this update, VS 2022 includes new "Live Preview" experiences for XAML and web apps, which let you kiss the ancient-feeling recompile-and-run workflow goodbye. They're also touting a new Web Live Preview that speeds up your workflow with instant updates when working with CSS or data controls. (I suppose we're happy to have it, even if it's an expected feature from modern IDEs these days.) Lastly, this update introduces Force Run, a way to run your app to a specific point while ignoring other breakpoints or exceptions. (If you work in C++, check out the post. C++ isn't a .NET language, so I don't write about it here.)


When I see click-baity articles with headlines like Why Developers Should Learn Kubernetes, I can't help but cringe. Thankfully, the article is better than the headline suggests—but even so, it's a lot to suggest developers should know the ins and outs of pods, containers, networking, DNS, and load balancing. There's a difference between asking developers to Shift Left and making them become SRE experts.

Kubernetes gets a lot of grief: it can be extreme overkill and it incurs a steep learning curve. Even so, containerization (and orchestrating these containers) is here to stay, but we should be providing developers with more abstractions over Kubernetes to make it a more PaaS-like experience. Microsoft is working on their part, and don't be surprised to see Azure (and other clouds) make investments in this space.


🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Unhandled Exceptions in Blazor Server with Error Boundaries ]]> https://www.daveabrock.com/2021/07/21/blazor-error-boundaries/ 60d136759caef9003b75e3f9 Wed, 21 Jul 2021 09:54:52 -0500 This post originally appeared on the Telerik Developer Blog.

In Blazor—especially Blazor Server—there is no such thing as a small unhandled exception. When Blazor Server detects an unhandled exception from a component, ASP.NET Core treats it as a fatal exception. Why?

Blazor Server apps implement data processing statefully so that the client and server can have a "long-lived relationship." To accomplish this, Blazor Server creates a circuit server-side, which indicates to the browser how to respond to events and what to render. When an unhandled exception occurs, Blazor Server treats it as a fatal error because the circuit hangs in an undefined state, which can potentially lead to usability or security problems.

As a result, your app is as good as dead, loses its state, and your users are met with an undesirable An unhandled error has occurred message, with a link to reload the page.

In many cases, unhandled exceptions can be out of a developer's control—like when you have an issue with third-party code. From a practical perspective, it's not realistic to assume components will never throw.

Here's an example that a Blazor developer posted on GitHub: let's say I've got two components: MyNicelyWrittenComponent, which I wrote, and a third-party component, BadlyWrittenComponent, which I did not write. (Badly written components rarely identify themselves so well, but I digress.)

<div>
   <MyNicelyWrittenComponent />
   <BadlyWrittenComponent Name="@(null)" />
</div>

What if the third-party code is coded like this?

@code {
    [Parameter] public string Name { get; set; }
}
<p>@Name.ToLower()</p>

While I can reasonably expect BadlyWrittenComponent to blow up, the circuit is broken, and I can't salvage MyNicelyWrittenComponent. This begs the question: if I get an unhandled exception from a single component, why should my entire app die? As a Blazor developer, you're left spending a lot of time hacking around this by putting try and catch blocks around every single method of your app, leading to performance issues—especially when thinking about cascading parameters.

If you look at front-end ecosystems like React, they use Error Boundaries to catch errors in a component tree and display a fallback UI when a failure occurs in a single component. When this happens, the errors are isolated to the problematic component, and the rest of the application remains functional.

Fear no more: with .NET 6 Preview 4, the ASP.NET Core team introduced Blazor error boundaries. Inspired by Error Boundaries in React, it attempts to catch recoverable errors that can't permanently corrupt state—and like the React feature, it also renders a fallback UI.

From the Error Boundaries design document, the team notes that its primary goals are to allow developers to provide fallback UIs for component subtrees if an exception exists, allow developers to provide cleaner fallback experiences, and provide more fine-grained control over how to handle failures. This capability won't catch all possible exceptions, but it covers the most common scenarios, such as the various lifecycle methods (like OnInitializedAsync, OnParametersSetAsync, and OnAfterRenderAsync) and rendering use cases with BuildRenderTree.

In this post, I'll help you understand what Error Boundaries are—and just as importantly, what they aren't.

Get Started: Add Error Boundaries to the Main Layout

To get started, let's use a quick example. Let's say I know a guy—let's call him MAUI Man—that runs a surf shop. In this case, MAUI Man has a Blazor Server app where he sells surfboards and t-shirts. His developer, Ed, wrote a ProductService that retrieves this data.

Let's say the Surfboard API is having problems. Before the error boundary functionality was introduced, my component would fail to process the unhandled exception, and I'd see the typical error message at the bottom of the page.

Let's use the new ErrorBoundary component instead. To do this, navigate to the MainLayout.razor file. (If you aren't familiar, the MainLayout component is your Blazor app's default layout.) In this file, surround the @Body declaration with the ErrorBoundary component. If an unhandled exception is thrown, we'll render a fallback error UI.

@inherits LayoutComponentBase

<div class="page">
    <div class="sidebar">
        <NavMenu />
    </div>

    <div class="main">
        <div class="content px-4">
          <ErrorBoundary>
            @Body
          </ErrorBoundary>    
        </div>
    </div>
</div>

On the home page where we call the Surfboard API, we see the default error UI.

The default UI does not include any content other than  An error has occurred. Even in development scenarios, you won't see stack trace information. Later in this post, I'll show you how you can include it.

This UI renders an empty <div> with a blazor-error-boundary CSS class. You can override this in your global styles if desired, or even replace it altogether. In the following example, I customize my ErrorBoundary component: I include the @Body inside a RenderFragment called ChildContent. This content displays when no error occurs. ErrorContent—which, for the record, is technically a RenderFragment<Exception>—displays when there's an unhandled error.

<ErrorBoundary>
  <ChildContent>
     @Body
  </ChildContent>
  <ErrorContent>
     <p class="my-custom-class">Whoa, sorry about that! While we fix this problem, buy some shirts!</p>
  </ErrorContent>
</ErrorBoundary> 

How else can you work with the ErrorBoundary? Let's explore.

Exploring the ErrorBoundary Component

Aside from the ChildContent and ErrorContent fragments, the out-of-the-box ErrorBoundary component also provides a CurrentException property—it's an Exception type you can use to get stack trace information. You can use this to add to your default error message (which should only be exposed in development environments for security reasons).
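Since ErrorContent is a RenderFragment<Exception>, you can also surface the current exception right in your fallback UI—here's a sketch (and again, keep exception details out of production):

<ErrorBoundary>
  <ChildContent>
     @Body
  </ChildContent>
  <ErrorContent Context="exception">
     <p>Something failed: @exception.Message</p>
  </ErrorContent>
</ErrorBoundary>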

Most importantly, the ErrorBoundary component allows you to call a Recover method, which resets the error boundary to a "non-errored state." By default, the component handles up to 100 errors through its MaximumErrorCount property. The Recover method does three things for you: it resets the component's error count to 0, clears the CurrentException, and calls StateHasChanged. The StateHasChanged call notifies components that the state has changed and typically causes your component to be rerendered.

We can use this to browse to our Shirts component when we have an issue with the Surfboard API (or the other way around). Since the boundary is set in the main layout, at first we see the default error UI on every page. We can use the component's Recover method to reset the error boundaries on subsequent page navigations.

@code {
    // Assumes the layout markup binds this field: <ErrorBoundary @ref="errorBoundary">
    ErrorBoundary errorBoundary;

    protected override void OnParametersSet()
    {
        // Reset the boundary on navigation so the error UI doesn't persist across pages
        errorBoundary?.Recover();
    }
}

As a best practice, you'll typically want to define your error boundaries at a more granular level. Armed with some knowledge about how the ErrorBoundary component works, let's do just that.

Dealing with Bad Data

It's quite common to experience issues with fetching data, whether it's from a database directly or from APIs we're calling (either internal or external). This can happen for a variety of reasons: a flaky connection, an API's breaking changes, or just inconsistent data.

In my ShirtList component, we call off to a ShirtService. Here's what I have in ShirtList.razor.cs:

using ErrorBoundaries.Data;
using Microsoft.AspNetCore.Components;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace ErrorBoundaries.Pages
{
    partial class ShirtList : ComponentBase
    {
        public List<Shirt> ShirtProductsList { get; set; } = new List<Shirt>();

        [Inject]
        public IProductService ShirtService { get; set; }

        protected override async Task OnInitializedAsync()
        {
            ShirtProductsList = await ShirtService.GetShirtList();
        }
    }
}

In my ShirtList.razor file, I'm iterating through the ShirtProductsList and displaying it. (If you've worked in ASP.NET Core before, you've done this anywhere between five million and ten million times.)

<table class="table">
 <thead>
     <tr>
         <th>ID</th>
         <th>Name</th>
         <th>Color</th>
         <th>Price</th>
     </tr>
  </thead>
  <tbody>
     @foreach (var board in ShirtProductsList)
     {  
         <tr>
            <td>@board.Id</td>
            <td>@board.Name</td>
            <td>@board.Color</td>
            <td>@board.Price</td>
          </tr>   
        }
  </tbody>
</table>

Instead, I can wrap the display in an ErrorBoundary and catch the error to display a message instead. (In this example, to avoid repetition I'll show the <tbody> section for the Razor component.)

<tbody>
   @foreach (var board in ShirtProductsList)
   {
        <ErrorBoundary @key="@board">
            <ChildContent>
                <tr>
                   <td>@board.Id</td>
                   <td>@board.Name</td>
                   <td>@board.Color</td>
                   <td>@board.Price</td>
                 </tr>   
             </ChildContent>
             <ErrorContent>
                 Sorry, I can't show @board.Id because of an internal error.
             </ErrorContent>
         </ErrorBoundary>        
     }
</tbody>

In this situation, the ErrorBoundary can take a @key, which in my case is the individual shirt that lives in the board variable. The ChildContent will display the data just as before, assuming I don't get any errors. If I encounter any unhandled errors, I'll use ErrorContent to define the contents of the error message. In my case, it provides a more elegant solution than placing generic try and catch statements all over the place.

What Error Boundaries Aren't

I hope this gentle introduction to Blazor Error Boundaries has helped you understand how it all works. We should also talk about which use cases aren't a great fit for Error Boundaries.

Blazor Error Boundaries is not meant to be a global exception handling mechanism for any and all unhandled errors you encounter in your apps. You should be able to log all uncaught exceptions using the ILogger interface.

While you can mark all your components with error boundaries and ignore exceptions, you should take a more nuanced approach to your application's error handling. Error Boundaries are meant for control over your specific component's unhandled exceptions and not a quick way to manage failures throughout your entire application.  

Conclusion

While it's been nice to see the growth of Blazor over the past several years, it's also great to see how the ASP.NET Core team isn't afraid to add new features by looking around at other leading front-end libraries—for example, CSS isolation was inspired by Vue, and Error Boundaries takes cues from what React has done. Not only is Blazor built on open web technologies, but it's also not afraid to look around the community to make things better.

Do you find this useful? Will it help you manage component-specific unhandled exceptions? Let us know in the comments.

]]>
<![CDATA[ The .NET Stacks #57: 🧐 Taking a look at Blazor Error Boundaries ]]> https://www.daveabrock.com/2021/07/18/dotnet-stacks-57/ 60e8a79a139437003e98f0b5 Sun, 18 Jul 2021 18:00:00 -0500 Good morning, everybody. I hope everyone has had a good last couple of weeks, as I took last week off to celebrate Independence Day here in the United States.

Here's what we have this week:

  • The big thing: Taking a look at Blazor Error Boundaries
  • The little things: .NET Foundation elections, Minimal API improvements, pining to help with OSS?
  • Last week in the .NET world

The big thing: Taking a look at Blazor Error Boundaries

This weekend, I finished writing a piece on Blazor error boundaries. It'll be published in a week or two, so here's a preview for you.

Looking at Blazor Server apps: when an unhandled exception occurs, Blazor Server treats it as a fatal error because the circuit hangs in an undefined state, which can potentially lead to usability or security problems. As a result, your app is as good as dead, loses its state, and your users are met with an undesirable An unhandled error has occurred message, with a link to reload the page.

This begs the question: if I get an unhandled exception from a single component, why should my entire app die? As a Blazor developer, you're left spending a lot of time hacking around this by putting try and catch blocks around every single method of your app, leading to performance issues—especially when thinking about cascading parameters. Blazor developers look longingly at React's Error Boundaries and wonder when we get to have nice things.

With .NET 6 Preview 4, you can. Inspired by Error Boundaries in React, it attempts to catch recoverable errors that can't permanently corrupt state—and like the React feature, it also renders a fallback UI. This capability won't catch all possible exceptions, but it covers the most common scenarios, such as the various lifecycle methods (like OnInitializedAsync, OnParametersSetAsync, and OnAfterRenderAsync) and rendering use cases with BuildRenderTree.

The new ErrorBoundary component provides a ChildContent fragment (what to display when all goes well), an ErrorContent fragment (what to display when it doesn't), and a CurrentException property so you can capture exception details.

Most importantly, the ErrorBoundary component allows you to call a Recover method, which resets the error boundary to a "non-errored state." By default, the component handles up to 100 errors through its MaximumErrorCount property. The Recover method does three things for you: it resets the component's error count to 0, clears the CurrentException, and calls StateHasChanged. The StateHasChanged call notifies components that the state has changed and typically causes your component to be rerendered.

Here's an example:

@code {
    ErrorBoundary errorBoundary;

    protected override void OnParametersSet()
    {
        errorBoundary?.Recover();
    }
}

For another example, let's say you have a component that calls off to an API. Here, I'm getting a list of shirts. I can wrap the data fetching in an ErrorBoundary.

<tbody>
   @foreach (var board in ShirtProductsList)
   {
        <ErrorBoundary @key="@board">
            <ChildContent>
                <tr>
                   <td>@board.Id</td>
                   <td>@board.Name</td>
                   <td>@board.Color</td>
                   <td>@board.Price</td>
                 </tr>   
             </ChildContent>
             <ErrorContent>
                 Sorry, I can't show @board.Id because of an internal error.
             </ErrorContent>
         </ErrorBoundary>        
     }
</tbody>

Blazor Error Boundaries is not meant to be a global exception handling mechanism for any and all unhandled errors you encounter in your apps. You should be able to log all uncaught exceptions using the ILogger interface.

While you can mark all your components with error boundaries and ignore exceptions, you should take a more nuanced approach to your application's error handling. Error Boundaries are meant for control over your specific component's unhandled exceptions and not a quick way to manage failures throughout your entire application. What do you think?


The little things: .NET Foundation elections, Minimal API improvements, pining to help with a project?

The .NET Foundation will be holding elections next month for two upcoming Board seats. If you've made it to the Elections page, you'll see my face as I'm on the Election Committee.

The .NET Foundation serves the general .NET community by promoting .NET in general, advocating .NET OSS, promoting .NET across a wider community of developers, supporting .NET community events, and offering administrative support to member projects. This is an organization that runs separately from Microsoft.

As a Board member, you'd help run the .NET Foundation by deciding how the money is spent, deciding which projects join the foundation, and much more. If you're interested in running for a seat, let me know and I'll get you started.


We've covered Minimal APIs in .NET 6 a time or two. Prioritizing functions over classes, they're a route to simple APIs that many call ".NET Express"—I like the name because it nods to one of the inspirations (Express for Node.js) and describes getting right to the point.

The examples have mostly been a little "Hello World"-ish, but ASP.NET Core architect David Fowler has been showing that it's coming along—most recently, a Results class for returning results from endpoints.

This Results class looks nice, especially if you aren't a fan of attributes for MVC controllers. I'm interested to see if it can be extended further, and the team is looking into it.
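As a sketch of the style (the exact shape was still in flux during the previews; this follows the API that eventually shipped):

app.MapGet("/movies/{id}", async (int id, MovieDbContext db) =>
    await db.Movies.FindAsync(id) is Movie movie
        ? Results.Ok(movie)       // 200 with the entity
        : Results.NotFound());    // 404, no attributes required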

If you're looking to extend model binding by using a single request object, you'll have to wait as it won't make it into .NET 6 (but maybe .NET 7). You can bind directly from the JSON object, but manually dealing with route and query parameters might get a tad unwieldy.


If you're looking to help with some open-source, David Pine could use some help. David has a project called the Azure Cosmos DB Repository .NET SDK, which allows you to use the repository pattern to work with your Cosmos DB instances. It really helps to simplify working with CRUD behavior in Cosmos, especially if you're familiar with the IRepository<T> interface. I find it quite nice, and use it for one of my projects (and have written about it as well).

David is working on a Blazor book for O'Reilly—if you've ever written a book or know someone who has, you won't be surprised to hear he'll need some help with the project. If you're interested in helping out, let him know.


🌎 Last week in the .NET world

🔥 The Top 3

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #56: Keeping it short this week ]]> https://www.daveabrock.com/2021/07/04/dotnet-stacks-56/ 60d90ac6d835a2003ea8b035 Sun, 04 Jul 2021 09:17:00 -0500 NOTE: This is the web version of my weekly newsletter, released on June 28, 2021. To get the issues right away, subscribe at dotnetstacks.com or the bottom of this post.

Happy Monday! Again. I should really come up with a better opening line. Anyway, it's been a pretty crazy week and I haven't had time to dive deep into any topics this week. As such, I'll just be providing you with the links this week.

Also, thanks to the Independence Day holiday in the US, I'll be taking next week off and will be back with a full issue on July 12. See you then!


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #55: 🆕 Ready or not, here comes .NET 6 Preview 5 ]]> https://www.daveabrock.com/2021/06/27/dotnet-stacks-55/ 60cdea419caef9003b75e236 Sun, 27 Jun 2021 08:25:00 -0500 Welcome to another busy week! Here's what we have in store this Monday morning:

  • One big thing: .NET 6 Preview 5 has arrived
  • The little things: Visual Studio 2022 Preview 1 has arrived, dropping support for older frameworks, Markdown tables extension
  • Last week in the .NET world

One big thing: .NET 6 Preview 5 has arrived

Even though it seems we're still recovering from Preview 4, announced around Build, it's about that time: .NET 6 Preview 5 arrived last Thursday. While Preview 4 brought a lot of developer productivity and experience features like Minimal APIs and Blazor WebAssembly AOT, Preview 5 is a jam-packed release with a lot of work going toward unification in .NET 6.

A big part of that is SDK workloads—announced in Preview 4—a new .NET SDK feature that enables the .NET team to add support for new application types without increasing the size of the SDK. For example, you may want to include MAUI or Blazor WebAssembly application types as you need them. You can view SDK workloads as a package manager for the .NET SDK—emphasis on the SDK, as Microsoft's vision of one SDK is coming together.

On top of that, Preview 5 brings NuGet package validation, a lot more Roslyn analyzers, improvements to the Microsoft.Extensions APIs, a new JSON source generator, WebSocket compression, OpenTelemetry support, and much more. What about ASP.NET Core? Hot Reload is enabled by default, improvements to Blazor WebAssembly download size with runtime relinking, faster gets and sets for HTTP headers, and more.

With EF Core, Preview 5 brings the first iteration of compiled models. Much like Blazor WebAssembly ahead-of-time (AOT) compilation, the compiled models functionality targets improving your startup time. As Jeremy Likness notes: "If startup time for your application is important and your EF Core model contains hundreds or thousands of entities, properties, and relationships, this is one release you don’t want to ignore."

The EF team is reporting 10x performance gains using compiled models. How does this work? When EF creates a context, it creates and compiles delegates to set the table properties. This gives you the ability to query properties right away. Using lazy initialization, EF Core only does this work when it's needed. You'll want to check out the blog post for an in-depth discussion of all that's involved. While a fast startup time won't get any complaints, keep in mind that global query filters, lazy loading proxies, change tracking proxies, and IModelCacheKeyFactory implementations aren't supported yet.
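Mechanically, you generate the compiled model from the CLI and then point your context at the generated class. A sketch, with made-up project and context names:

dotnet ef dbcontext optimize --output-dir CompiledModels --namespace MyApp.CompiledModels

Then, in your context:

protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options
        // ShopContextModel is the class the command above generates
        .UseModel(MyApp.CompiledModels.ShopContextModel.Instance)
        .UseSqlServer(connectionString);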

If mobile/desktop development is your jam, you'll also want to check out MAUI updates for Preview 5.


The little things: Visual Studio 2022 Preview 1 is here, dropping support for older frameworks, Markdown tables extension

After much fanfare about 64-bit support, Visual Studio 2022 Preview 1 is now here and available for you to try out. By all accounts, it seems to be smooth and snappy—you know, as it should be—and can be installed with earlier versions of Visual Studio. You can check out the Visual Studio 2022 roadmap to see what's coming next.

The 64 bits are getting most of the buzz, but you also may have missed support for IntelliCode completions. These updates complete code for you, sometimes even whole lines of code. Based on your context, it'll predict the next part of code to write and gives you the option to hit Tab to accept the changes. Mark Wilson-Thomas mentions this is done by combining a "rich knowledge of your coding context" with a transformer model trained on around half a million public open-source GitHub repositories.

In one example, I wrote a function to print out the first name and last name—it suggested concatenating the strings together. (I would have preferred string interpolation, but I digress.) Then, as I did a WriteLine call, it suggested I use the method I just wrote.


The .NET team is embarking on an effort to drop older framework versions. While dropping frameworks can be a breaking change, building for every framework version increases complexity and size. The team worked around this by harvesting—building for current frameworks, but downloading a package's earlier version and harvesting binaries for earlier frameworks. This allows you to update without the worry of losing a framework version, but you won't get any fixes or features from these binaries.

Starting with .NET 6 Preview 5, the .NET team will no longer harvest to ensure support for all assets. This means they are dropping support for frameworks older than .NET Framework 4.6.1, .NET Core 3.1, and .NET Standard 2.0.

As Richard Lander writes:

If you’re currently referencing an impacted package from an earlier framework, you’ll no longer be able to update the referenced package to a later version. Your choice is to either retarget your project to a later framework version or not updating the referenced package (which is generally not a huge take back because you’re already consuming a frozen binary anyways).

Here's something I learned this week: there's a Markdown Tables extension in Visual Studio Code. Who do I talk to about refunding all my time spent tweaking Markdown tables?


🌎 Last week in the .NET world

It's another busy week with new preview releases for .NET and Visual Studio.

🔥 The Top 5

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ 5 Development Best Practices That Will Help You Craft Better Tests ]]> https://www.daveabrock.com/2021/06/23/development-best-practices-better-tests/ 60a2ae294dc2e1003bc1b6a8 Wed, 23 Jun 2021 10:04:10 -0500 This post originally appeared on the Telerik Developer Blog.

Our industry's advancements in agile methodologies, DevOps, and even cloud engineering have brought a key capability to focus on: shared responsibility.

If you've been a developer for a while, you might remember throwing operation problems to an Ops team, testing to a QA team, and so on. These days, developing applications isn't just about developing applications. When developers are invested in the entire lifecycle of the application process, it results in a more invested and productive team.

This doesn't come without drawbacks, of course. While having specialists in those areas will always have value—and seeing these specialists on the development team can be tremendously helpful—developers can feel overwhelmed with all they need to take on. I often see this when it comes to automated testing. As developers, we know we need to test our code, but how far should we take it? After a certain point, isn't the crafting of automated tests the responsibility of QA?

The answer: it's everyone's responsibility and not something you should throw over the wall to your QA department (if you're lucky enough to have one). This post attempts to cut through the noise and showcase best practices that will help you craft better automated tests.

Defining Automated Testing

When I talk about automated tests, I'm referring to a process that uses tools to execute predefined tests based on predetermined events. This isn't meant to be an article focused on differentiating concepts like unit testing or integration testing—if they are scheduled or triggered based on events (like checking in code to the main branch), these are both automated tests.

As a result of having an automated testing pipeline, you can confidently ship low-risk releases and ship better products with higher quality. Of course, automated testing isn't a magic wand—but done appropriately, it can give your team confidence that you're shipping a quality product.

Decide Which Tests to Automate

It isn't practical to say that you want to automate all your test cases. In a perfect world, would you want to? Sure. This isn't a perfect world: you have lots to get done, and you can't expect to test everything. Also, many of your tests need human judgment and intervention.

How often do you need to repeat your test? Is it something that is rarely encountered and only done a few times? A manual test should be fine. However, if tests need to be run frequently, it's a good candidate for an automated test.

Here are some more scenarios to help you identify good candidates for automated tests:

  • When tested manually, is it hard to get right? Do folks spend a lot of time setting up conditions to test manually?
  • Does it cause human error?
  • Do tests need to be integrated across multiple builds?
  • Do you require large data sets for performing the same actions on your tests?
  • Does the test cover frequent, high-risk, and hot code paths in your application?

Test Early, Often, and Frequently

Your test strategy should be an early consideration and not an afterthought. If you're on a three-week sprint cycle and you're repeatedly rushing to write, fix, or update your tests on the third week, you're headed for trouble. When this happens, teams often rush to get the tests working—you'll see tests filled with inadequate test conditions and messy workarounds.

When you make automated testing an important consideration, you can run tests frequently and detect issues as they arise. In the long run, this will save a lot of time—plus, you'd much rather fix issues early than have an angry customer file a production issue because your team didn't give automated testing the attention it deserved.

Consistently Evaluate Valid and Accurate Results

To that end, your team should always be comfortable knowing your tests are consistent, accurate, and repeatable. Does the team check in code without ensuring that previous testing is still accurate? Saying all the tests pass is not the same as saying that the tests are accurate. When it comes to unit testing, in many automated CI/CD systems, you can integrate build checks to ensure code coverage does not consistently drop whenever new features enter the codebase.

If tests are glitchy or unstable, take the time to understand why rather than repeatedly restarting runs manually because sometimes they work and sometimes they don't. If needed, remove these tests until you can resolve the issues.

Ensure Quality Test Result Reporting

What good is an automated test suite if you don't have an easy way to detect, discover, and resolve issues? When a test fails, development teams should be alerted immediately. While the team might not know how to fix it right away, they should know where to look, thanks to having clearly named tests and error messages, robust tools, and a solid reporting infrastructure. Developers should then spend their time resolving issues and not going in circles to understand what the issue is and where it lies.

Pick The Right Tool

Automated testing can become more manageable with the right tools. So I hope you're sitting down for this: I recommend Telerik Test Studio. With Telerik Test Studio, everyone can drive value from your testing footprint: whether it's developers, QA, or even your pointy-haired bosses.

Developers can use the Test Studio Dev Edition to automate your complex .NET applications right from Visual Studio. QA engineers can use Test Studio Standalone to facilitate developing functional UI testing. Finally, when you're done telling your managers, "I'm not slacking, I'm compiling," you can point them to the web-based Executive Dashboard for monitoring test results and building reports.

]]>
<![CDATA[ The .NET Stacks #54: 🎀 Putting a bow on .NET 6 date and time updates ]]> https://www.daveabrock.com/2021/06/20/dotnet-stacks-54/ 60c6395f9caef9003b75dff2 Sun, 20 Jun 2021 10:30:00 -0500 NOTE: This is the web version of my weekly newsletter, released on June 14, 2021. To get the issues right away, subscribe at dotnetstacks.com or the bottom of this post.

Happy Monday! Here's what's going on this week:

  • The big thing: Putting a bow on .NET 6 date and time updates
  • The little things: Have I Been Pwned project updates, VS Code remote repositories extension, App Service support on .NET 6, Azure Functions updates
  • Last week in the .NET world

The big thing: Putting a bow on .NET 6 date and time updates

In a previous issue, we briefly spent some time exploring the new date and time APIs in .NET 6. (Despite feedback about the cringy title, I stand behind DateTime might be seeing other people.) In these posts, I passed along the information from various GitHub issues. This week, it was nice to see Matt Johnson-Pint officially write about it in detail on the .NET Blog.

Here's the gist: the .NET team is rolling out new DateOnly and TimeOnly types for .NET 6.

The DateOnly type, if you can believe it, only supports a date—a year, a month, and a day. You might want to use this when you don't need an associated time, like birthdays and anniversaries. With DateOnly, Microsoft promises better type safety when you only want dates, serialization improvements, and simplicity when interacting with databases. DateOnly ships with a robust constructor but no deconstructor.

As for TimeOnly, you'd use it only to represent a time of day. Potential use cases are alarm clock times, when a business is open, or recurring appointment times. Why not TimeSpan? Here, TimeOnly is intended for a time of day and not elapsed time. Again, you'll gain type safety and more accuracy when calculating ranges.
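A quick sketch of TimeOnly in that spirit (the business hours are made up for illustration):

var opensAt = new TimeOnly(9, 0);      // 9:00 AM
var closesAt = opensAt.AddHours(8);    // 5:00 PM
var now = TimeOnly.FromDateTime(DateTime.Now);

// IsBetween handles ranges, including ones that cross midnight
bool isOpen = now.IsBetween(opensAt, closesAt);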

There's a lot of chatter in the community asking: why not use Noda Time instead? From the post:

Noda Time is a great example of a high-quality, community developed .NET open source library ... However, we didn’t feel that implementing a Noda-like API in .NET itself was warranted. After careful evaluation, it was decided that it would be better to augment the existing types to fill in the gaps rather than to overhaul and replace them. After all, there are many .NET applications built using the existing DateTime, DateTimeOffset, TimeSpan, and TimeZoneInfo types. The DateOnly and TimeOnly types should feel natural to use along side them.

As an FYI, the new structs aren't supported in Entity Framework Core 6 yet, but you can check out this comment to find a relevant tracking issue. You'll want to check out the post—in addition to the new DateOnly and TimeOnly structs, the .NET team is rolling out time zone enhancements as well.

These changes aren't replacing any DateTime APIs, just augmenting them. So when it comes time for you to switch to .NET 6, do what works for you.


The little things: Have I Been Pwned project updates, VS Code remote repositories extension, Azure Functions updates, App Service support on .NET 6

This week, we'll talk about Have I Been Pwned, a new Visual Studio Code extension, and updates on running Azure Functions and Azure App Service updates on .NET 6.

Have I Been Pwned project updates

You've likely heard of Troy Hunt's Have I Been Pwned project. At the risk of oversimplifying all that Troy does, you can use it to check if your personal data has been compromised by data breaches. With the help of Cloudflare and Azure Functions, he's been able to operate at a tremendous scale with minimum cost. That site is now getting around 1 billion requests a month.

Last August, Troy announced his plans to take HIBP to open source. Recently, Troy wrote that the Pwned Passwords component is now a .NET Foundation project.  You can check out the code at the HaveIBeenPwned organization on GitHub. As a .NET developer, it's a nice way to look at the code and how it all works, especially when it comes to getting hashes and anonymizing requests.

VS Code remote repositories extension

In this space last week, we mentioned a new VS Code extension for Project Tye. Visual Studio Code is at it again with a new Remote Repositories extension. With this extension, you can browse, search, edit, and commit to any remote GitHub repository without a clone. This extension will support Azure DevOps soon as well. (GitHub is the long-term vision, yet Microsoft is consistently releasing these new capabilities to Azure DevOps. For this and other reasons, I'm pretty sure Azure DevOps will outlive me.)

Personally, looking at GitHub repos is good for my learning—both for what to do and what not to do—and there should be a middle ground between browsing github.com and cloning a big repository to my machine. This should help.

Azure Functions updates

Last week, Azure Functions PM Anthony Chu wrote about recent Azure Functions updates. (I'll be interviewing him soon here, so hit me up if you have any other questions.) We've written about some of what he mentioned, but not everything—here's the latest.

  • Visual Studio support for .NET 5.0 isolated function apps: Released in May, Visual Studio 2019 16.10 includes full support for .NET 5.0 isolated process Azure Functions apps. You can check out the tutorial.
  • You can run .NET 6 on Azure Functions: To do this, you'll need Azure Functions V4—right now, it's limited to a preview release of Azure Functions Core Tools V4. Check out the post, as there are a few caveats to keep in mind. It's early, and there's no official support yet. Microsoft has published a tutorial on writing .NET 6 apps on Azure Functions.

Azure App Service support on .NET 6

In somewhat related Azure development news, you can use .NET 6 Preview bits on Azure App Service—in either Windows or Linux workloads. You can do this through the Azure App Service Early Access Runtime feature. This feature allows you to get the latest version of SDKs for various languages. With Early Access Runtime, you can get access without waiting for a release cycle.

Microsoft is boasting that it's the first time a pre-release stack is publicly available on App Service before the GA release.


🌎 Last week in the .NET world

We've got another busy week with some great content from the developer community.

🔥 The Top 5

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Your guide to REST API versioning in ASP.NET Core ]]> https://www.daveabrock.com/2021/06/16/rest-api-versioning-aspnet-core/ 6092e0fbf4327a003ba304d0 Wed, 16 Jun 2021 08:39:00 -0500 This post was originally published on the Telerik Developer Blog.

When you ship an API, you're inviting developers to consume it based on an agreed-upon contract. So what happens if the contract changes? Let's look at a simple example.

If I call an API with a URL of https://mybandapi.com/api/bands/4, I'll get the following response:

{
  "id": 4,
  "name": "The Eagles, man"
}

Now, let's say I decide to update my API schema with a new field, YearFounded. Here's how the new response looks now:

{
  "id": 4,
  "name": "The Eagles, man",
  "YearFounded": 1971
}

With this new field, existing clients still work. It's a non-breaking change, as existing apps can happily ignore it. You should document the new field in The Long Run, for sure, but it isn't a huge deal at the end of the day.

Let's say you want the name to be instead a collection of names associated with the band, like this:

{
  "id": 4,
  "names": 
  [
    "The Eagles, man",
    "The Eagles"
  ],
  "FoundedYear": 1971
}

You introduced a breaking change. Your consumers expected the name field to be a string value, and now you're returning a collection of strings. As a result, when consumers call the API, their applications will not work as intended, and your users are not going to Take It Easy.

Whether it's a small non-breaking change or one that breaks things, your consumers need to know what to expect. To help manage your evolving APIs, you'll need an API versioning strategy.

Since ASP.NET Core 3.1, Microsoft has provided libraries to help with API versioning. It provides a simple and powerful way to add versioning semantics to your REST services and is also compliant with the Microsoft REST Guidelines. In this post, I'll show you how you can use the Microsoft.AspNetCore.Mvc.Versioning NuGet package to apply API versioning to ASP.NET Core REST web APIs.

We're going to look through various approaches to versioning your APIs and how you can enable the functionality in ASP.NET Core. Which approach should you use? As always, It Depends™. There's no dogmatic "one size fits all" approach to API versioning. Instead, use the approach that works best for your situation.

Get Started

To get started, you'll need to grab the Microsoft.AspNetCore.Mvc.Versioning NuGet package. The easiest way is through the .NET CLI. Execute the following command from your project's directory:

dotnet add package Microsoft.AspNetCore.Mvc.Versioning

With the package installed in your project, you'll need to add the service to ASP.NET Core's dependency injection container. From Startup.ConfigureServices, add the following:

public void ConfigureServices(IServiceCollection services)
{
   services.AddApiVersioning();
   // ..
}

What happens when I make a GET request on https://mybandapi.com/api/bands/4? I'm greeted with the following 400 Bad Request response:

{
    "error": {
        "code": "ApiVersionUnspecified",
        "message": "An API version is required, but was not specified.",
        "innerError": null
    }
}

By default, you need to append ?api-version=1.0 to the URL to get this working, like this:

https://mybandapi.com/api/bands/4?api-version=1.0

This isn't a great developer experience. To help, we can introduce a default version. If consumers don't explicitly include a version in their request, we'll assume they want to use v1.0. Our library takes in an ApiVersioningOptions type, which we can use to specify a default version. So you're telling the consumers, "If you don't opt in to another version, you'll be using v1.0 of our API."

In Startup.ConfigureServices, update your AddApiVersioning code to this:

services.AddApiVersioning(options =>
{
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.DefaultApiVersion = new ApiVersion(1, 0);
});

With this in place, your consumers should be able to call a default version from https://mybandapi.com/api/bands/4.

Introducing Multiple Versions for a Single Endpoint

Let's say we want to work with two versions of our API, 1.0 and 2.0. We can use this by decorating our controller with an ApiVersionAttribute.

[Produces("application/json")]
[Route("api/[controller]")]
[ApiVersion("1.0")]
[ApiVersion("2.0")]
public class BandsController : ControllerBase
{}

Once I do that, I can use the MapToApiVersionAttribute to let ASP.NET Core know which action methods are mapped to my API versions. In my case, I've got two methods wired up to a GET on api/bands, GetById and GetById20.

Here's how the annotations look:

[MapToApiVersion("1.0")]
[HttpGet("{id}")]
public async Task<ActionResult<Band>> GetById(int id)
{}

[MapToApiVersion("2.0")]
[HttpGet("{id}")]
public async Task<ActionResult<Band>> GetById20(int id)
{}

With this, the controller will execute the 1.0 version of the GetById with the normal URI (or /api/bands/4?api-version=1.0) and the controller executes the 2.0 version when the consumer uses https://mybandapi.com/api/bands/4?api-version=2.0.

Now that we're supporting multiple versions, it's a good idea to let your consumers know which versions your endpoints are supporting. To do this, you can use the following code to report API versions to the client.

services.AddApiVersioning(options =>
{
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.ReportApiVersions = true;
});

When you do this, ASP.NET Core provides an api-supported-versions response header that shows which versions an endpoint supports.

By default, the library supports versioning from query strings. I like this method. Your clients can opt-in to new versions when they're ready. And if no version is specified, consumers can rely on the default version.

Of course, this is not the only way to version an API. Aside from query strings, we'll look at other ways you can version your APIs in ASP.NET Core:

  • URI/URL path
  • Custom request headers
  • Media versioning with Accept headers

Versioning with a URI Path

If I want to version with the familiar /api/v{version number} scheme, I can do it easily. Then, from the top of my controller, I can use a RouteAttribute that matches my API version. Here's how my controller annotations look now:

[ApiVersion("1.0")]
[ApiVersion("2.0")]
[Route("api/v{version:apiVersion}/[controller]")]
public class BandsController : ControllerBase
{}

Then, when I call my API from api/v2/bands/4, I will be calling version 2 of my API. While this is a popular method and is easy to set up, it doesn't come without drawbacks. It doesn't imply a default version, so clients might feel forced to update the URIs throughout their apps whenever a change occurs.

Whether it's using a query string or the URI path, Microsoft.AspNetCore.Mvc.Versioning makes it easy to work with versioning from the URI level. Many clients might prefer to do away with URI versioning for various reasons. In these cases, you can use headers.

Custom Request Headers

If you want to leave your URIs alone, you can have consumers pass a version from a request header. For example, if consumers want to use a non-default version of my API, I can have them pass in a X-Api-Version request header value.

When moving to headers, also consider that you're making API access a little more complicated. Instead of hitting a URI, clients need to hit your endpoints programmatically or from API tooling. This might not be a huge jump for your clients, but is something to consider.

To use custom request headers, you can set the library's ApiVersionReader to a HeaderApiVersionReader, then pass in the header name. (If you want to win Trivia Night at the pub, the answer to "What is the default ApiVersionReader?" is "QueryStringApiVersionReader.")

services.AddApiVersioning(options =>
{
   options.DefaultApiVersion = new ApiVersion(2, 0);
   options.AssumeDefaultVersionWhenUnspecified = true;
   options.ReportApiVersions = true;
   options.ApiVersionReader = new HeaderApiVersionReader("X-Api-Version");
});

In Postman, I can pass in an X-Api-Version value to make sure it works. And it does.

Media Versioning with Accept Headers

When a client passes a request to you, they use an Accept header to control the format of the request it can handle. These days, the most common Accept value is the media type of application/json. We can use versioning with our media types, too.

To enable this functionality, change AddApiVersioning to the following:

services.AddApiVersioning(options =>
{
     options.DefaultApiVersion = new ApiVersion(1, 0);
     options.AssumeDefaultVersionWhenUnspecified = true;
     options.ReportApiVersions = true;
     options.ApiVersionReader = new MediaTypeApiVersionReader("v");
});

Clients can then pass along an API version with an Accept header—for example, Accept: application/json;v=2.0.

Combining Multiple Approaches

When working with the Microsoft.AspNetCore.Mvc.Versioning NuGet package, you aren't forced into using a single versioning method. For example, you might allow clients to choose between passing in a query string or a request header. The ApiVersionReader supports a static Combine method that allows you to specify multiple ways to read versions.

services.AddApiVersioning(options =>
{
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.ReportApiVersions = true;
    options.ApiVersionReader =
    ApiVersionReader.Combine(
       new HeaderApiVersionReader("X-Api-Version"),
       new QueryStringApiVersionReader("version"));
});

With this in place, clients get v2.0 of our API by calling /api/bands/4?version=2 or specifying an X-Api-Version header value of 2.

Learn More

This post only scratches the surface—there's so much more you can do with the Microsoft.AspNetCore.Mvc.Versioning NuGet package. For example, you can work with OData and enable a versioned API explorer, enabling you to add versioned documentation to your services. If you don't enjoy decorating your controllers and methods with attributes, you can use the controller conventions approach.

Check out the library's documentation for details, and make sure to read up on known limitations (especially if you have advanced ASP.NET Core routing considerations).

]]>
<![CDATA[ Upgrading a Blazor WebAssembly Azure Static Web App from .NET 5 to .NET 6 ]]> https://www.daveabrock.com/2021/06/15/upgrade-blazor-static-app-net-5-net-6/ 60c92ecc9caef9003b75e1b9 Tue, 15 Jun 2021 18:30:50 -0500 In the last year or so, I've been working on a Blazor WebAssembly project called Blast Off with Blazor. It's a Blazor WebAssembly app hosted on the Azure Static Web Apps service, which recently went GA. Right now, it consists of four small projects: an Azure Functions API project, a project for the Blazor WebAssembly bits, a project to share models between the two, and a test project.

I wanted to see how easily I could upgrade the Blazor project from .NET 5 to .NET 6. With .NET 6 now on Preview 4, it's evolved enough that I'm ready for an upgrade. I was hoping I could double-click the project file, change my client and test projects from net5.0 to net6.0, and be on my way. (I also know I was being very optimistic.)
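
For reference, that's a one-line change in each project file, something like this:

<PropertyGroup>
  <!-- was: <TargetFramework>net5.0</TargetFramework> -->
  <TargetFramework>net6.0</TargetFramework>
</PropertyGroup>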

After upgrading the TFM, I was able to run the app successfully. Great. I started this app using .NET 5 and its mature API surface, so I didn't expect any issues there.  The interesting part is deploying it to Azure Static Web Apps. Azure Static Web Apps uses the Oryx build system and I wasn't too sure if it supported .NET 6 yet. It does, and it deployed to Azure Static Web Apps on the first try.

What about upgrading Azure Functions to .NET 6?

Recently, the Azure Functions team announced initial support for running Azure Functions on .NET 6. This currently requires a preview release of Azure Functions Core Tools V4. So, I could get this working locally, but the support won't come to Azure Static Web Apps until we get closer to .NET 6 GA in November 2021. I'll upgrade it then. For now, I'd rather upgrade the Blazor project so I can take advantage of things like hot reload, AOT, and JSON serialization improvements.

]]>
<![CDATA[ The .NET Stacks #53: 🚀 This issue was compiled ahead of time ]]> https://www.daveabrock.com/2021/06/13/dotnet-stacks-53/ 60bb69ca941321003e21c266 Sun, 13 Jun 2021 01:00:00 -0500 Happy Monday, all. I wanted to thank everybody for the birthday wishes as the newsletter turned 1 last week. Also, welcome to all the new subscribers!

This week wasn't as crazy, as we're all recovering from Build, but as always, there's plenty to discuss. Let's get started.

  • One big thing: Taking a look at Blazor WebAssembly AOT
  • The little things: NuGet improvements in Visual Studio, Tye gets a VS Code extension
  • Last week in the .NET world

One big thing: Taking a look at Blazor WebAssembly AOT

With Blazor, you've got two hosting models to consider, Blazor Server and Blazor WebAssembly.

With Blazor Server, your download size isn't a concern. You can leverage server capabilities with all the .NET-compatible APIs, and you can use thin clients—you will, however, need to consider the higher latency and scale if you have a lot of concurrent users. (Unless you have a lot of concurrent connections, it likely won't be an issue.) With Blazor WebAssembly, you can leverage client capabilities and a fully functioning app once the client downloads it. If you want to embrace Jamstack principles with a SPA calling off to serverless APIs (with attractive options like Azure Static Web Apps), Blazor WebAssembly is a nice option. The download size is larger, and apps do take significantly longer to load, though.

In my experience across the community, I do see many Blazor scenarios geared towards Blazor Server. Many times, folks are also packaging an ASP.NET Core API, and the download size and loading times of WebAssembly might be holding people back. "I want to wait until we get AOT," I've heard a lot of people say.  

Last week, with .NET 6 Preview 4, Blazor ahead-of-time (AOT) compilation finally arrived. With AOT, you can compile .NET code directly to WebAssembly to help boost runtime performance. Without AOT, Blazor WebAssembly apps run on a .NET IL interpreter, meaning the .NET code runs significantly slower than it would on a normal .NET runtime (like with Blazor Server).

I gave it a go using Steve Sanderson's Picture Fixer app featured in Build talks and the .NET 6 Preview 4 release announcement. As mentioned in the announcement, you need to do two things:

  • Install the Blazor WebAssembly AOT build tools as an optional .NET SDK workload
  • Add <RunAOTCompilation>true</RunAOTCompilation> to your project file
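
The project file change is a single property. As a minimal sketch:

<PropertyGroup>
  <!-- Opt this project into Blazor WebAssembly AOT compilation at publish time -->
  <RunAOTCompilation>true</RunAOTCompilation>
</PropertyGroup>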

Then, you'll need to publish your application. On my beefy Dell XPS 15 and its 64 gigs of RAM, it took almost 15 minutes to AOT 15 assemblies and use the Emscripten toolchain to do the heavy lifting. The team notes they are working on speeding this up, but it's good to note this wait only occurs during publish time and not local development. With AOT in place, you can see dramatic changes in data-intensive tasks specifically.

As with the decision to go with Blazor Server or Blazor WebAssembly, you need to consider tradeoffs when introducing AOT to your Blazor projects. Since AOT-compiled projects are typically double the size, you need to weigh the value of trading load-time performance for runtime performance. You can pick and choose when you use AOT, of course, so typical use cases would take a hybrid approach, leveraging AOT specifically for data-intensive tasks.

When Steve Sanderson talked to us a while back, he said:

If we can get Blazor WebAssembly to be faster than JS in typical cases (via AoT compilation, which is very achievable) and somehow simultaneously reduce download sizes to the point of irrelevance, then it would be very much in the interests of even strongly JS-centric teams to reconsider and look at all the other benefits of C#/.NET too.

AOT is a big step in that direction. The size of the .NET runtime will never compare to small JS frameworks, but closing the gap on download size and performance would be a giant step towards providing a modern, fast SPA solution for folks open to the .NET ecosystem.


NuGet improvements in Visual Studio, Tye gets a VS Code extension

Last week, Christopher Gill announced that Visual Studio is introducing NuGet Package Suggestions in Visual Studio 16.10. With IntelliCode Package Suggestions, NuGet uses your project's metadata—like what packages you have installed and your project type—to suggest packages you might need. Package suggestions are shown in the NuGet Package Manager UI before you enter a search query.

As for the fine print, it currently only works on the project level and does not suggest packages outside of nuget.org. Also, it won't support deprecated packages or ones that are transitively installed.


Also from last week, Microsoft rolled out a Visual Studio Code extension for Tye. If you aren't familiar, Project Tye is an open-source project that helps developers work with microservices and distributed applications.

We're seeing a lot of nice updates for Tye, and I wonder when it'll get out of the "it's experimental, things might break" phase. For example, the official repo on GitHub still states: "Project Tye is an open source experiment at the moment. We are using this time to try radical ideas to improve microservices developer productivity and see what works...For the duration of the experiment, consider every part of the tye experience to be volatile." It has a lot of nice use cases with Dapr, for example, and I'd love to see when Microsoft is going to put a ring on it.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Low Ceremony, High Value: A Tour of Minimal APIs in .NET 6 ]]> https://www.daveabrock.com/2021/06/09/low-ceremony-high-value-a-tour-of-minimal-apis-in-net-6/ 60b8a80b941321003e21bec2 Wed, 09 Jun 2021 09:54:40 -0500 This post was originally published on the Telerik Blog.

When developing APIs in ASP.NET Core, you're traditionally forced into using ASP.NET Core MVC. Going against many of the core tenets of .NET Core, MVC projects give you everything and the kitchen sink. After creating a project from the MVC template and noticing all that it contains, you might be thinking: all this to get some products from a database? Unfortunately, MVC requires a lot of ceremony just to build an API.

Looking at it another way: if I'm a new developer or a developer looking at .NET for the first time (or after a long break), it's a frustrating experience—not only do I have to learn how to build an API, I have to wrap my head around all I have to do in ASP.NET Core MVC. If I can build services in Node with just a few lines of code, why can't I do it in .NET?

Starting with .NET 6 Preview 4, you can. The ASP.NET team has rolled out Minimal APIs, a new, simple way to build small microservices and HTTP APIs in ASP.NET Core. Minimal APIs hook into ASP.NET Core's hosting and routing capabilities and allow you to build fully functioning APIs with just a few lines of code. This does not replace building APIs with MVC—if you are building complex APIs or prefer MVC, you can keep using it as you always have—but it's a nice approach to writing no-frills APIs.

In this post, I'll give you a tour of Minimal APIs. I'll first walk you through how it will work with .NET 6 and C# 10. Then, I'll describe how to start playing with the preview bits today. Finally, we'll look at the path forward.

Write a Minimal API with Three Lines of Code

If you want to create a Minimal API, you can make a simple GET request with just three lines of code.

var app = WebApplication.Create(args);
app.MapGet("/", () => "Hello World!");
await app.RunAsync();

That's it! When I run this code, I'll get a 200 OK response with the following:  

HTTP/1.1 200 OK
Connection: close
Date: Tue, 01 Jun 2021 02:52:42 GMT
Server: Kestrel
Transfer-Encoding: chunked

Hello World!

How is this even possible? Thanks to top-level statements, a welcome C# 9 enhancement, you can execute a program without a namespace declaration, class declaration, or even a Main(string[] args) method. This alone saves you nine lines of code. Even without the Main method, the args parameter is still available—the compiler takes care of this for you.

You'll also notice the absence of using statements. This is because by default, in .NET 6, ASP.NET Core will use global usings—a new way to declare your usings in a single file, avoiding the need to declare them in individual source files. I can keep my global usings in a dedicated .usings file, as you'll see here:

global using System;
global using System.Net.Http;
global using System.Threading.Tasks;
global using Microsoft.AspNetCore.Builder;
global using Microsoft.Extensions.Hosting;
global using Microsoft.Extensions.DependencyInjection;

If you've worked with Razor files in ASP.NET Core, this is similar to using a _Imports.razor file that allows you to keep @using directives out of your Razor views. Of course, this will be out-of-the-box behavior but doesn't have to replace what you're doing now. Use what works best for you.

Going back to the code, after creating a WebApplication instance, ASP.NET Core uses MapGet to add an endpoint that matches any GET requests to the root of the API. Right now, I'm only returning a string. I can use the lambda improvements in C# 10 to pass in a callback—common use cases might be a model or an Entity Framework context. We'll provide a few examples to show off its flexibility.

Use HttpClient with Minimal APIs

If you're writing an API, you're likely using HttpClient to consume APIs yourself. In my case, I'll use HttpClient to call off to the Ron Swanson Quotes API to get some inspiration. Here's how I can make an async call to make this happen:

var app = WebApplication.Create(args);
app.MapGet("/quote", async () => 
    await new HttpClient().GetStringAsync("https://ron-swanson-quotes.herokuapp.com/v2/quotes"));
await app.RunAsync();

When I execute this response, I'll get a wonderful quote that I will never disagree with:

HTTP/1.1 200 OK
Connection: close
Date: Fri, 04 Jun 2021 11:27:47 GMT
Server: Kestrel
Transfer-Encoding: chunked

["Dear frozen yogurt, you are the celery of desserts. Be ice cream or be nothing. Zero stars."]

In more real-world scenarios, you'll probably call GetFromJsonAsync with a model, but that can be done just as easily.
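
Here's a quick sketch of that approach. I'm assuming the quotes API returns a JSON array of strings, and GetFromJsonAsync comes from the System.Net.Http.Json extensions:

var app = WebApplication.Create(args);
app.MapGet("/quote", async () =>
    await new HttpClient().GetFromJsonAsync<string[]>(
        "https://ron-swanson-quotes.herokuapp.com/v2/quotes"));
await app.RunAsync();

Speaking of models, let's take a look to see how that works.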

Work with Models

With just an additional line of code, I can work with a Person record. Records, also a C# 9 feature, are reference types that use value-based equality and help enforce immutability. With positional parameters, you can declare a model in just a line of code. Check this out:

var app = WebApplication.Create(args);
app.MapGet("/person", () => new Person("Bill", "Gates"));
await app.RunAsync();

public record Person(string FirstName, string LastName);

In this case, the model binding is handled for us, as we get this response back:

HTTP/1.1 200 OK
Connection: close
Date: Fri, 04 Jun 2021 11:36:31 GMT
Content-Type: application/json; charset=utf-8
Server: Kestrel
Transfer-Encoding: chunked

{
  "firstName": "Bill",
  "lastName": "Gates"
}

As we get closer to the .NET 6 release, this will likely work with annotations as well, like if I wanted to make my LastName required:

public record Person(string FirstName, [Required] string LastName);

So far, we haven't passed anything to our inline lambdas. If we set up a POST endpoint, we can pass in the Person and output what was passed in. (Of course, a more common real-world scenario would be passing in a database context. I'll leave that as an exercise for you, as setting up a database and initializing data is outside the scope of this post.)

var app = WebApplication.Create(args);
app.MapPost("/person", (Person p) => $"We have a new person: {p.FirstName}     {p.LastName}");
await app.RunAsync();

public record Person(string FirstName, string LastName);

When I use a tool such as Fiddler (wink, wink), I'll get the following response:

HTTP/1.1 200 OK
Connection: close
Date: Fri, 04 Jun 2021 11:36:31 GMT
Content-Type: application/json; charset=utf-8
Server: Kestrel
Transfer-Encoding: chunked

We have a new person: Ron Swanson

Use Middleware and Dependency Injection with Minimal APIs

Your production-grade APIs—no offense, Ron Swanson—will need to deal with dependencies and middleware. You can handle this all through your Program.cs file, as there is no Startup file out of the box. When you create a WebApplicationBuilder, you have access to the trusty IServiceCollection to register your services.

Here's a common example, when you want only to show exception details when developing locally.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

// endpoints

There's nothing against you creating a Startup file yourself as you always have, but you can do it right here in Program.cs as well.
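
Dependency injection works the same way. Here's a minimal sketch with a hypothetical IGreetingService (the interface and implementation are mine, not part of the framework). Resolving services from handler parameters reflects the planned .NET 6 behavior:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IGreetingService, GreetingService>();
var app = builder.Build();

app.MapGet("/greet", (IGreetingService greeter) => greeter.Greet("Dave"));
await app.RunAsync();

// Hypothetical service types, for illustration only
public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}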

Try Out Minimal APIs Yourself

If you'd like to try out Minimal APIs yourself right now, you have two choices: live on the edge or live on the bleeding edge.

Live on the Edge: Using the Preview Bits

Starting with Preview 4, you can use that release to explore how Minimal APIs work, with a couple of caveats:

  • You can't use global usings
  • The lambdas need to be cast

Both of these are resolved with C# 10, but the Preview 4 bits use C# 9 for now. If you want to use Preview 4, install the latest .NET 6 SDK—I'd also recommend installing the latest Visual Studio 2019 Preview. Here's how our first example would look. (I know, six lines of code. What a drag.)

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;

var app = WebApplication.Create(args);
app.MapGet("/", (Func<string>)(() => "Hello World!"));
await app.RunAsync();

If you want to start with an app of your own, you can execute the following from your favorite terminal:

dotnet new web -o MyMinimalApi

Live on the Bleeding Edge: Using C# 10 and the Latest Compiler Tools

If you want to live on the bleeding edge, you can use the latest compiler tools and C# 10.

First, you'll need to add a custom nuget.config to the root of your project to get the latest tools:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="nuget" value="https://api.nuget.org/v3/index.json" />
    <add key="dotnet6" value="https://dnceng.pkgs.visualstudio.com/public/_packaging/dotnet6/nuget/v3/index.json" />
    <add key="dotnet-tools" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json" />
  </packageSources>
</configuration>

In your project file, add the following to use the latest compiler tools and enable the capability for the project to read your global usings from a .usings file:

<ItemGroup>
   <PackageReference Include="Microsoft.Net.Compilers.Toolset" Version="4.0.0-2.21275.18">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
   </PackageReference>
</ItemGroup>

<ItemGroup>
  <Compile Include=".usings" />
</ItemGroup>

Then, you can create and update a .usings file, and you are good to go! I owe a debt of gratitude to Khalid Abuhakmeh and his CsharpTenFeatures repo for assistance. Feel free to refer to that project if you have issues getting the latest tools.

What does this mean for APIs in ASP.NET Core?

If you're new to building APIs in ASP.NET Core, this is likely a welcome improvement. You can worry about building APIs and not all the overhead that comes with MVC.

If you've developed ASP.NET Core APIs for a while, like me, you may be greeting this with both excitement and skepticism. This is great, but does it fit the needs of a production-scale API? And when it does, will it be hard to move over to the robust capabilities of ASP.NET Core MVC?

With Minimal APIs, the goal is to extract core API-building capabilities—the ones that only exist in MVC today—and allow them to be used outside of MVC. By extracting these components into a new paradigm, you can rely on middleware-like performance. Then, if you need to move from inline lambdas to MVC and its classes and controllers, the ASP.NET team plans to provide a smooth migration path. These are two different roads with a bridge between them.

If you think long-term, Minimal APIs could be the default way to build APIs in ASP.NET Core—in most cases, it's better to start off small and then grow, rather than starting with MVC and not leveraging all its capabilities. Once you need it, it'll be there.

Of course, we've only scratched the surface of all you can do with Minimal APIs. I'm interested in what you've built with them. What are your thoughts? Leave a comment below.

]]>
<![CDATA[ The .NET Stacks #52: 🎂 Happy birthday to us ]]> https://www.daveabrock.com/2021/05/31/dotnet-stacks-52/ 60b23fb4941321003e21ba22 Mon, 31 May 2021 18:44:41 -0500 As if the completely ridiculous banner image didn't tip it off, it's true: today is the 1st birthday of The .NET Stacks! I'd like to thank our friend Isaac Levin—our first interview guest—for being such a good sport. If you haven't seen the wonderful "Application Development" keynote from Build, you should (and the picture will all make sense).

But most of all, I'd like to thank all of you for your support. Honestly, this little project started as a way to keep my mind busy during a pandemic lockdown and I really wasn't sure how things would go. (Looking back at the first issue ... that was very evident.) I'm thrilled it's been able to have the impact it has, and I'm grateful to all of you for that.

With all that out of the way, what are we talking about this week? In this extended issue, there's a lot here:  

  • Build 2021 recap
  • .NET 6 Preview 4 has arrived
  • Visual Studio updates
  • System.Console in .NET 7

Build 2021 recap

Last week, Microsoft rolled out Build 2021. You can check out the 330 sessions at the Build website, and there's a YouTube playlist at the Microsoft Developer YouTube channel. It's no secret that these days Build is heavy on promoting Azure services, but .NET got a lot of love last week, too.

Your mileage may vary, but my favorite sessions included the application development keynote, a .NET "Ask the Experts" session,  increasing productivity with Visual Studio, microservices with Dapr, modern app development with .NET, and a .NET 6 deep-dive session with Scott Hunter. (Hunter is Microsoft's CSO—the Chief Scott Officer. He also runs .NET.)

I want to call out a few interesting details from that session: updates on C# 10 and a new Blazor FluentUI component library that's taking shape. (There were other nice updates on .NET MAUI and Minimal APIs that we'll surely address in depth in later issues.)

C# 10 updates

In Scott Hunter's talk, Mads Torgersen and Dustin Campbell walked through some updates coming to C# 10. C# 10 looks to be focused on productivity and simplicity features. I want to show off record structs, required object initializers, auto-implemented property improvements, null parameter checking, global usings, and file-scoped namespaces.

Record structs

C# 9 brought us records, which give you the ability to enforce immutability with the benefits of "value-like" behavior. While C# 9 records are really just classes under the covers, accessed by reference, the "value-like" behaviors ensure that default equality checking works with your object's data (as opposed to reference equality). A good use case is with DTOs and other objects that benefit from immutability.

As with all reference types, though, passing around a lot of records can create a lot of pressure on the garbage collector. If you couple that with using with expressions, copying and GC pressure can become an issue if you go crazy with records. Can we use structs with records? With C# 10, you can with the record struct syntax. It'll behave similarly, with the key difference being that record structs aren't heap-allocated. This will also work with tuples, expressions, or any other struct types.

Let's look at some code, shall we? Let's say you have a Person record in C# 9:

record Person
{
   public string FirstName { get; init; }
   public string LastName { get; init; }
}

To use a record struct, change it to this:

record struct Person
{
   public string FirstName { get; init; }
   public string LastName { get; init; }
}

The default record declaration will still be a reference type. If you want to make that explicit, you can use the new record class syntax. The two are identical.
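
In other words, these two declarations behave identically (the second type name is different only so the sketch compiles):

record Person
{
   public string FirstName { get; init; }
   public string LastName { get; init; }
}

record class AlsoPerson
{
   public string FirstName { get; init; }
   public string LastName { get; init; }
}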

Required object initializers

I enjoy the flexibility of object initializers. You can use them to initialize objects however you want: you can initialize just a few properties, in whatever order you want, or none at all! Unfortunately, this flexibility can also bite you in the rear end if you aren't careful.

With C# 10, you can mark properties as required when performing object initialization, like this:

record struct Person
{
   public required string FirstName { get; init; }
   public required string LastName { get; init; }
}

It's early days for this feature, but it might also help to enforce whether types can be instantiated by positional syntax (constructors) or object initialization.

Auto-implemented property improvements

In the Build talk, Dustin and Mads talked about the cliff: let's say you want to change one little thing about an auto-implemented property. The next thing you know, you're creating a backing field, adding bodies for your getters and setters, and you're left wondering why it takes so much work to change one little thing.

With C# 10, you can refer to the auto-generated backing field without all that nonsense. You'll be able to work with a field keyword in your getters, setters, or even both.

record struct Person
{
   public string FirstName { get; init => field = value.Trim(); }
   public string LastName { get; init; }
}

This change provides better encapsulation and fewer lines of boilerplate code.

Null parameter checks

We've seen many nullability improvements over the last few C# releases, but null parameter checking can still be a manual chore—even with nullable reference types, you have to depend on the caller to do null checks.

With C# 10, this is taken care of with the new !! syntax:

public void DoAThing(string text!!)
{
   Console.WriteLine(text);
}

Don't worry, you aren't seeing double—this doesn't mean you're super excited about the text argument. Personally, I'm not a fan of the !!—at this rate, the C# team will need to start inventing new characters—but I am a fan of removing a bunch of this boilerplate nonsense.

Global usings and file-based namespaces

Lastly, the team introduced a few enhancements to help simplify your C# codebase.

With global usings, you can use the global using keywords to signify usings should be accessible throughout every .cs file in your project.

Here's a typical example of using statements you might want to use in your global using file:

global using System;
global using System.Collections.Generic;
global using System.Linq;
global using System.Threading.Tasks;
global using static System.Console;

I wonder if we could use Roslyn analyzers to shake out unused global usings for individual files. Anyway, I think this is a feature I will initially hate, then learn to love. It's nice to see exactly what each file uses, but after a while, it's a maintenance headache—this will be nice. (Not to mention ASP.NET Core developers are familiar with a similar approach with Razor files.) In any case, you might wind up with a global usings file for common references, then individual usings for references that aren't scattered across your projects.

Lastly, the team introduced file-scoped namespaces, which let you go from this:

namespace MyNamespace
{
   record Person
   {
      public string FirstName { get; init; }
      public string LastName { get; init; }
   }
}

To this:

namespace MyNamespace;

record Person
{
   public string FirstName { get; init; }
   public string LastName { get; init; }
}

Of course, you could use top-level statements to remove namespaces completely—however, there are plenty of reasons why you don't want to abstract away your namespace declarations. In those cases, it's a nice, clean approach.

New Blazor component library

So here's something interesting that isn't getting a lot of attention: Microsoft is working on a component library for Blazor. Technically, these are wrappers around Microsoft's existing FluentUI Web Components and are built on FAST. You can fork the repository and browse to the examples app for a test page.

This is early, but I'd recommend taking a look—while it comes with the Blazor name, these are technically Razor components. This means you can use them in other ASP.NET Core web apps, such as Razor Pages and MVC.

Microsoft customers have been asking for an in-house, free component library for a while—this will fill the need. While Microsoft will eventually introduce this as yet another tool at your disposal, they'll need to be careful here: the .NET ecosystem has a rich collection of open-source and third-party component libraries (both free and paid), and they'll need to avoid the perception that they're trying to replace these options. (To be abundantly clear, they definitely are not.)


.NET 6 Preview 4 has arrived

Just minutes into Build, Microsoft announced the official release of .NET 6 Preview 4. We've teased a few features in the last month or so, but it's nice to see the official news. Richard Lander wrote up the blog post. As a reminder, .NET 6 will be an LTS release.

.NET Hot Reload is a big .NET 6 feature (as is improving the developer inner loop in general). I've written about how you can use it by running dotnet watch with ASP.NET Core web apps (that is, Blazor, Razor Pages, and MVC). With Preview 4, you can also use it with other project types like WPF, Windows Forms, WinUI, console apps and "other frameworks that are running on top of the CoreCLR runtime." It's now integrated with the Visual Studio debugger as well—to do this, you need to download VS 2019 16.11 Preview 1.

In Lander's post, we also see much of what we've discussed previously: check out the official details on System.Text.Json improvements, LINQ enhancements, FileStream performance improvements on Windows, and new DateOnly and TimeOnly structs.

What about ASP.NET Core? ASP.NET Core is bringing it in this release—there's Minimal APIs, async streaming, HTTP logging middleware, improved SPA templates, Blazor error boundaries, and ... drum roll ... Blazor WebAssembly ahead-of-time (AOT) compilation! You can also start building .NET MAUI client-side apps with Blazor. Speaking of MAUI, there's a separate post outlining its Preview 4 updates. If you're using Entity Framework, make sure to check out that team's Preview 4 post to see all the wonderful perf improvements.

Preview 4 is a big one. With a little under six months to go, we only have a few previews left until the focus turns to bug fixes. .NET 6 is coming along nicely.


Visual Studio updates

Last week, Microsoft also released Visual Studio 2019 v16.10 and v16.11 Preview 1.

With 16.10, we're seeing some more Git workflow improvements. The initial improvements to the Git workflow in Visual Studio 2019 were a little rough, if we're being honest. It's nice to see the Visual Studio team listening to customer feedback and making it better. You can also now remove unused references—a long-adored ReSharper feature. In other news, there are improvements to Docker container tooling, IntelliSense completion, Test Explorer, and more. If F# is your jam, Phillip Carter announced some tooling updates for 16.10.

Also, if you're developing Azure Functions with the isolated worker in .NET 5, Azure Functions PM Anthony Chu has an update for you.

With 16.11 Preview 1, the big news is supporting hot reload in Visual Studio. We're also seeing .NET MAUI support.

On the topic of IDEs, JetBrains released its roadmaps for ReSharper 2021.2 and Rider 2021.2.


Rethinking System.Console in .NET 7

With .NET 7—yes, .NET 7!—Microsoft is taking a look at redesigning System.Console.

As Adam Sitnik describes it, the initial design was driven by Windows OS capabilities and APIs. With .NET going cross-platform, it introduced a number of issues since there wasn't a good way to map Windows concepts to Unix. You're encouraged to follow the discussion and provide feedback.

We've seen a lot of community innovation in this space. For example, Patrik Svensson's Spectre.Console library shows us that the developer console experience can be productive and beautiful. This isn't lost on the team, and I'm interested to see how this work evolves.


🌎 Last week in the .NET world

Welcome to Build week, where announcements are everywhere.

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #51: 👷‍♂️ The excitement is Build-ing ]]> https://www.daveabrock.com/2021/05/29/dotnet-stacks-51/ 60aa7cc34dc2e1003bc1b6e7 Sat, 29 May 2021 07:25:00 -0500 NOTE: This is the web version of my weekly newsletter, released on May 24, 2021. To get the issues right away, subscribe at dotnetstacks.com or the bottom of this post.

Happy Monday! I hope you're all doing well.

Here's what's going on this week:

  • The big thing: Previewing the big week ahead
  • The little things: SecureString meeting its demise, the .NET Coding Pack, web dev news
  • Last week in the .NET world

The big thing: Previewing the big week ahead

This week will be a big one: we've got Microsoft Build—probably the last virtual one—which kicks off on Tuesday. In what is surely not coincidental timing, .NET 6 Preview 4 should also be released. (And if that isn't enough, The .NET Stacks turns 1 next Monday. Please, no gifts.)

In my opinion, Build doesn't carry the same developer excitement as it has in the past—the frenetic pace of .NET keeps us busy throughout the year, and, to me, Build has shifted toward a marketing event. Still, it'll be nice to watch some sessions and see where things are going. You can check out the sessions on the Build website. I'll be keeping an eye on a .NET 6 deep dive, microservices with Dapr, and an Ask the Experts panel with many Microsoft .NET folks.

Next week, we'll also see the release of .NET 6 Preview 4 (finally!). While we'll pore over some of it next week when it's formally communicated, the "what's new" GitHub issue really filled up this week and has some exciting updates.

New LINQ APIs

.NET 6 Preview 4 will include quite a few new LINQ APIs.

Index and Range updates

LINQ will now see Enumerable support for Index and Range parameters. The Enumerable.ElementAt method will accept indices from the end of the enumerable, like this:

Enumerable.Range(1, 10).ElementAt(^4); // returns 7

Also, an Enumerable.Take overload will accept Range parameters, which allows you to slice enumerable sequences easily.
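
Here's a quick sketch, in the same spirit as the ElementAt example (the comment shows my expected output):

Enumerable.Range(1, 10).Take(2..5); // returns { 3, 4, 5 }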

MaxBy and MinBy

The new MaxBy and MinBy methods allow you to use a key selector to find a maximum or minimum method, like this (the example here is taken from the issue):

var people = new (string Name, int Age)[] { ("Tom", 20), ("Dick", 30), ("Harry", 40) };
people.MaxBy(person => person.Age); // ("Harry", 40)

Chunk

A new Chunk method allows you to chunk elements into a fixed size, like this (again, taken from the issue):

IEnumerable<int[]> chunks = Enumerable.Range(0, 10).Chunk(size: 3); // { {0,1,2}, {3,4,5}, {6,7,8}, {9} }

New DateOnly and TimeOnly structs

We talked about this in a past issue, but we'll see some new DateOnly and TimeOnly structs that add to DateTime support and do not deprecate what already exists. They'll be in the System namespace, as the others are. For use cases, think of DateOnly for business days and birthdays and TimeOnly for things like recurring meetings and weekly business hours.

Writing DOMs with System.Text.Json

So, this is fun: Preview 4 will bring us a writable DOM feature for System.Text.Json. There are quite a few use cases here. A big one is when you want to modify a subset of a large tree efficiently. For example, you'll be able to navigate to a subsection of a large JSON tree and perform operations from that subsection.
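
As a rough sketch, here's how that might look. (This uses the JsonNode API from System.Text.Json.Nodes as it eventually shipped in .NET 6; the exact Preview 4 shape may differ.)

using System;
using System.Text.Json.Nodes;

var node = JsonNode.Parse("{\"person\": {\"name\": \"Dave\", \"age\": 40}}");

// Navigate to a subsection of the tree and modify it in place
node["person"]["age"] = 41;

Console.WriteLine(node.ToJsonString()); // {"person":{"name":"Dave","age":41}}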

There's a lot more with Preview 4. Check out the GitHub issue for the full details. We'll cover more next week.


The little things: SecureString meeting its demise, the .NET Coding Pack, general web dev news

The SecureString API seems great, it really does. It means well. It allows you to flag text as confidential and provide an extra layer of security. The main driver is to avoid using secrets as plain text in a process's memory. However, this doesn't translate to the OS, even on Windows. Except for .NET Framework, array contents are passed around unencrypted. It does have a shorter lifetime, so there's that—but it isn't that secure. It's easy to screw up and hard to get right.

The .NET team has been trying to phase out SecureString for a while in favor of a more flexible ShroudedBuffer<T> type. This issue comment has all the juicy details of the latest updates.


This week, Scott Hanselman wrote about the .NET Coding Pack for Visual Studio Code. The pack includes an installation of VS Code, the .NET SDK (and adding it to the PATH), and a .NET extension. With the .NET Coding Pack, beginners will be able to work with .NET Interactive notebooks to quickly get started.


While we mostly talk about .NET around here, I think it's important to reach outside our bubble and keep up with web trends. I came across two interesting developments last week.

Google has decided to no longer give Accelerated Mobile Pages (AMP) preferential treatment in its search results (and they are even removing the icon from the results page). Whatever the reason—its controversy, lack of adoption, or Google's anti-trust pressure—it's a welcome step for an independent web. (Now, if only they'd bring back Google Reader.)

In other news, StackBlitz—in cooperation with Google Chrome and Vercel—has launched WebContainers, a way to run Node.js natively in your browser. Simply put, it provides an online IDE. The strides WebAssembly has made in the past few years have paved the way for a WASM-based operating system.

Under the covers, it includes a virtualized network stack that maps to the browser's ServiceWorker API, which enables offline support. It provides a leg up over something like Codespaces or various REPL solutions, which typically need a server. I take exception to StackBlitz saying those solutions "provide a worse experience than your local machine in nearly every way" ... but if you do any JavaScript work, this is an exciting development (especially when dealing with JS's notoriously cumbersome tooling and setup demands).


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #50: 🆕 What's new with C# 10? ]]> https://www.daveabrock.com/2021/05/23/dotnet-stacks-50/ 609fc71f4dc2e1003bc1b635 Sun, 23 May 2021 10:45:00 -0500 NOTE: This is the web version of my weekly newsletter, released on May 17, 2021. To get the issues right away, subscribe at dotnetstacks.com or the bottom of this post.

Happy Monday! I hope you have a good week. Here's what we have this week:

  • The big thing: Checking in on C# 10
  • The little things: Azure Static Web Apps goes GA, JetBrains .NET days, .NET 6 FileStream improvements, JSON schema validation
  • Last week in the .NET world

The big thing: Checking in on C# 10

Last week, Ken Bonny wrote a post highlighting some new C# 10 features. Of course, it's still a little early, and not all of these changes might make it in. But if language designer Mads Torgersen talks about them publicly, there's a good probability they will.

Ever since Microsoft rolled out records with C# 9, the community has been asking for record struct types. With the C# 9 release of records, the record type is a reference type that enforces value-like behaviors. With C# 10, the team is rolling out a record struct variant that makes the underlying type a value type. So, with a record struct, values are copied instead of passed by reference. Also, on the topic of records, you'll be able to add operators to them.
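
To make the copy semantics concrete, here's a sketch using the proposed syntax (the output comments show my expectations):

var p1 = new Point(1, 2);
var p2 = p1 with { X = 3 }; // copies the value; p1 is untouched

Console.WriteLine(p1); // Point { X = 1, Y = 2 }
Console.WriteLine(p2); // Point { X = 3, Y = 2 }

record struct Point(int X, int Y);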

Apart from this, Ken recaps what else is (probably) coming:

  • Flagging properties of classes, structs, records, or record structs as required
  • A new field keyword that can eliminate backing fields
  • Using a single file to declare namespace imports
  • Improvements with lambda attributes and lambda signature inference

There's a lot more with Ken's post, with some good code examples. Make sure to check it out to see what's coming with C# 10.


The little things: Azure Static Web Apps goes GA, JetBrains .NET days, .NET 6 FileStream improvements, JSON schema validation

This week, Azure Static Web Apps became generally available (and to answer everyone's #1 question, yes, apex domains are supported now). I've written about it exhaustively with my Blast Off with Blazor blog series. The gist is this: you integrate your front-end (with various JavaScript frameworks or the Blazor component library) with a backend powered by Azure Functions.

This week, I recapped my favorite things about Azure Static Web Apps—some I knew previously and some I learned about this week. I'm most impressed with the local development story. When using an Azure Static Web Apps CLI, you can leverage a static site server, a proxy to your API endpoints, and a mock authentication and authorization server.

Azure Static Web Apps is now shipping with a Standard tier. For 9 USD/month, you can take advantage of a 99.95% SLA and a "bring your own Functions" model. I find the latter to be most valuable, especially when you want to leverage other Azure Functions triggers (by default, you can only use HTTP triggers with Azure Static Web Apps).


JetBrains hosted their .NET Days conference this week (online, of course). JetBrains always hosts first-class community events, and this was no different. You can check out all the sessions on YouTube.

On Day 1, folks talked about C# source generators, debugging .NET apps, writing high-performance code, Azure CosmosDB and React, gRPC in .NET, GraphQL and Blazor, and using .NET and Dapper.

On Day 2, we learned about the 10 best C# features, F#, legacy refactoring, using null and void in .NET, async and await best practices, debugging with JetBrains Rider, and containerizing with Kubernetes.


I'm getting a little antsy about the release of .NET 6 Preview 4. Preview 3 shipped five weeks ago, and Microsoft aims for a new preview release every month or so, so we'll see Preview 4 very soon. It'll be jam-packed with updates for Minimal APIs, AOT support, and more.

We'll also see improvements to FileStream. The .NET team is rewriting the library, focusing on Windows (as the Unix implementation, built 20 years later, was already fast). With these changes, the team notes that when used with async I/O, it now never performs blocking work.
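
The nice part is that you likely won't have to change anything to benefit. Here's a minimal sketch of the kind of async file I/O that gets faster (the file name is just a placeholder):

using System.IO;

using var fs = new FileStream(
    "data.bin",
    FileMode.Open,
    FileAccess.Read,
    FileShare.Read,
    bufferSize: 4096,
    useAsync: true); // async I/O is where the .NET 6 rewrite shines

// Read the first chunk of the file without blocking a thread
var buffer = new byte[4096];
int bytesRead = await fs.ReadAsync(buffer);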

"Dave, shut up and show us the benchmarks!" OK, fine:

Look at those allocations! In the team's benchmarks, reading a 1 MB file is 2.5x faster, and writing is 5.5x faster.


When it comes to serialization with .NET, you've got a few options: mostly, the tried-and-true Newtonsoft.Json (which just surpassed a billion NuGet downloads) and the native System.Text.Json library. Unfortunately, with System.Text.Json, there's no easy way to perform JSON schema validation, and schema validation is a paid feature with Newtonsoft.Json. (There are also quite a few third-party options for schema validation.)

This week, Matthew Adams wrote a wonderful post on achieving schema validation with System.Text.Json. He used code generation to create .NET types that represent the structure of the schema (with strongly typed accessors for properties, array/list elements, and so on).


🌎 Last week in the .NET world

This week, we learn more about Azure Static Web Apps, GitHub Actions, modular monoliths, and more.

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #49: 🌟 Is reflection really that bad? ]]> https://www.daveabrock.com/2021/05/15/dotnet-stacks-49/ 60990a3155f561003b8198f4 Sat, 15 May 2021 05:30:00 -0500 NOTE: This is the web version of my weekly newsletter, which was released on May 10, 2021. To get the issues right away, subscribe at dotnetstacks.com or at the bottom of this post.

Happy Monday! I hope you have a good week. Here's what we have this week:

  • Is reflection still valuable?
  • IdentityServer templates shipping with .NET 6
  • Last week in the .NET world

Is reflection still valuable?

Last Monday, Marc Gravell asked: is the era of reflection-heavy C# libraries at an end? As C# source generators get more popular, developers might be wondering if they'll someday replace reflection (for the unfamiliar, reflection is a way of discovering and inspecting types at runtime).
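
If it's been a while, here's a small sketch of the kind of runtime type discovery we're talking about:

using System;
using System.Reflection;

// Discover every type in the current assembly at runtime
foreach (var type in Assembly.GetExecutingAssembly().GetTypes())
{
    Console.WriteLine(type.FullName);
}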

Today, unless we're library authors, a lot of reflection is provided to us without us having to care:

This provides a pretty reasonable experience for the consumer; their code just works, and - sure, the library does a lot of work behind the scenes, but the library authors usually invest a decent amount of time into trying to minimize that so you aren’t paying the reflection costs every time.

As the industry has evolved, we've seen how reflection impacts modern applications. For example, we see increased demands around parallel code (async/await), AOT platforms, runtime error discovery, and more. If you're thinking of the cloud, runtime performance matters (especially if you're paying for serverless compute by execution time).

Marc states:

Imagine you could take your reflection-based analysis code, and inject it way earlier - in the build pipe, so when your library consumer is building their code ...  you get given the compiler’s view of the code ... and at that point you had the chance to add your own code ... and have our additional code included in the build ... This solves most of the problems we’ve discussed.

This doesn't mean reflection doesn't have its use cases, and it's so pervasive in the .NET ecosystem it isn't going anywhere anytime soon. Source generators are great for resolving performance headaches with certain reflection use cases like with GetTypes(). Should we go all-in on source generators, though? Is it worth moving code into a user's app and build processes over developers enjoying the separation?

Here's another consideration: is reflection just getting a bad rap? It can be slow for sure, but the .NET team has noted that performance is improving, and there's a prototype they're working on that is 10x faster.

It'll be interesting to see how far the .NET platform pushes on source generators over improving reflection performance. I'm guessing we'll see a little in Column A and a little in Column B. Everything is a tradeoff. Still, it's promising to see source generators shake up decades of thinking reflection is the only way to inspect types dynamically.

IdentityServer templates shipping with .NET 6

Last December, IdentityServer introduced a licensing model (and we talked about it back in issue #20). You'll need to buy a license if your organization makes more than 1M USD/year (I know I have a global audience but believe me, you don't want me converting currency).

This week, Barry Dorrans announced that .NET 6 will continue to ship these templates (using the new RPL-licensed version). I personally applaud Microsoft's decision to continue to support IdentityServer, which is getting pushback from the blog's comments and a super-spicy GitHub issue. A lot of the pushback comes from folks accusing IdentityServer of a "bait and switch"—as if profiting off free open-source software for a decade wasn't enough—and that Microsoft should build it themselves.

As Microsoft has said a bunch of times, and bears repeating:

As stated we are not authentication experts, we have no expertise in writing or maintaining an authentication server. We have a team at Microsoft dedicated to that, and they produce AAD. The .NET team will not be writing production ready authentication servers, both because of the cost in that and because in doing so it's likely we'll cannibalize users from existing open source projects, something the community was very vocal in wanting us not to do when the initial discussions around IdentityServer inclusion was started.

🌎 Last week in the .NET world

We have a lot of great community posts this week.

🔥 The Top 4

📅 Community and events

🕸 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Azure Static Web Apps is production-ready: These are my 5 favorite things ]]> https://www.daveabrock.com/2021/05/13/azure-static-web-apps-favorite-things/ 6099490155f561003b81990c Wed, 12 May 2021 20:21:12 -0500 As the modern web gets insanely complicated, the industry is clamoring for the simplicity of static web apps. The Jamstack movement has impacted how we design full-stack web applications. With static web apps, you can pre-render static content to a CDN—and make them dynamic through calls to APIs and serverless functions. It's fast, performant, and a dirt-cheap proposition—in many cases, you're only responsible for compute costs.

Last May, Microsoft stepped into the already busy static web app market with its Azure Static Web Apps offering. (Yes, you could—and still can!—accomplish this through Azure Storage and Azure CDN, but it's a lot of manual setup and maintenance.)  

With Azure Static Web Apps, you integrate your front-end—with JavaScript frameworks like Angular, React, Svelte, and Vue or C#'s Blazor component library—with a backend powered by Azure Functions. You can even deploy with static site frameworks like Hugo, Jekyll, and Gatsby.

Why do you want everything under one umbrella? It offers the following benefits:

  • GitHub and Azure DevOps integration, where changes in your repo trigger build and deployments
  • Your content is globally distributed
  • Azure Static Web Apps works with APIs from a reverse-proxy model, meaning you don't have to deal with CORS configuration headaches
  • Automated staging versions are generated whenever you open a pull request

I've played with Azure Static Web Apps for the last six months with my Blast Off with Blazor blog series. Azure Static Web Apps has evolved quite a bit, and Microsoft just ditched the preview label (like, hours ago). Microsoft doesn't typically recommend public preview bits for production-scale workloads, so this is big news: it's ready, it scales, and it ships with a Standard (non-free) tier with enterprise capabilities like bringing your own Azure Functions and a 99.95% SLA.

You can read a lot of posts and docs that'll introduce you to Azure Static Web Apps. (Can I suggest mine?) In this post, though, I'll take a different perspective: here are 5 of my favorite things.

Deployment environments

Let's say you have a trigger that is based on changes to your main branch. When you create a pull request against that branch, your changes are also deployed to a temporary non-production environment.

Here's how it looked for me:

You can push new updates to this environment as long as the PR is still open. This is useful when previewing changes, sending updates out to your team for approval and review, and so on. Once changes are merged into your branch, it disappears.

NOTE! Currently, staged environments are accessible by a public URL, so to quote Elmer Fudd: "Be vewy, vewy careful."

Authentication and authorization support

Out of the "free" box (if you don't want to open your wallet for the Standard plan), Azure Static Web Apps supports authorization with Azure AD, GitHub, and Twitter. Based on the provider, you send users invites from the Azure Portal (which assigns them to specific roles), and in a staticwebapp.config.json file, they are granted access to routes.

You can streamline access through /.auth/login/{provider}, and that URL is consistent all the way to production. In addition, you can set up redirect rules to authorize a provider and even block other ones:

{
  "routes": [
    {
      "route": "/login",
      "redirect": "/.auth/login/github"
    },
    {
      "route": "/.auth/login/twitter",
      "statusCode": 404
    }
  ]
}

With that in place, you can reference client authentication data from a direct-access endpoint with /.auth/me.
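
The response is a small JSON document describing the logged-in user. It looks roughly like this (the values here are illustrative):

{
  "clientPrincipal": {
    "identityProvider": "github",
    "userId": "<unique-user-id>",
    "userDetails": "username",
    "userRoles": ["anonymous", "authenticated"]
  }
}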

If you're on the Standard paid plan, you can also set up custom authentication—with this approach, you can authenticate with any provider that supports OIDC.

CLI support

It's great to clickety-click in the Azure Portal (yes, that's the technical term), but that doesn't scale. You'll need to automate your deployments eventually. To do this, you can use the Azure CLI's az staticwebapp commands.

Once you have an app in your repository (either GitHub or Azure DevOps), execute az login, log in with your credentials, then create your Azure Static Web Apps instance with something like this:

az staticwebapp create \
    -n my-static-web-app \
    -g my-static-web-app-rg \
    -s https://github.com/daveabrock/my-static-web-app \
    -l eastus2 \
    -b main \
    --token <LOL_nice_try>

Of course, the CLI is not a one-trick pony. If you check out the docs, you can also work with app settings, manage the environment, manage users, and more.

You can also download the official Azure Static Web Apps CLI from npm or Yarn. This will supercharge your local development experience. Speaking of local development...

Local development isn't an afterthought

The thing about the cloud is ... well, it works great in the cloud. The local experience is often an afterthought. It's hard to predict how our apps work in the cloud without targeting our specific cloud resources. You can use the Azure Static Web Apps CLI to do a lot of heavy lifting for you.

The CLI provides a static site server, a proxy to your API endpoints, a mock authentication and authorization server, and more. The chart in the docs illustrates it better than I ever could.

You can run swa start to start your app, and even call some other API with the --api flag—this API doesn't even need to be part of your Azure Static Web Apps resource! So, that's nice. But really, I want to focus on the star of this show: the authorization and authentication emulator, which simulates the Azure security flow.

When a user logs in locally, you define a mocked identity profile. Earlier, we talked about GitHub authorization. In this case, when browsing to /.auth/login/github, you'll see a page that lets you define a profile.

You can define client principal values here, use /.auth/me to get a client principal, and then execute the /.auth/logout endpoint to clear the principal and log out the mock user. I have absolutely no interest in mocking Azure authentication myself. This is a wonderful feature.

Apex domain support

This is a little tongue-in-cheek, but I can't help but get excited about root/APEX domain support—this gives you the ability to configure your site at blastoffwithblazor.com and not just www.blastoffwithblazor.com. Previously, you had to hack your way around Cloudflare using Burke Holland's blog post to do this, but no more! (Sorry, Burke—no, your Google Analytics is not playing games on you.)

This support was missing throughout the previews, but you can now set it up through simple TXT record validation. After you enter your domain in the Azure Portal, configure a TXT record with your provider to prove ownership, then create an ALIAS record back in the Azure Portal.
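
As a rough sketch, the records at your DNS provider end up looking something like this (the names and values are illustrative):

; prove ownership of the apex domain to Azure
@    TXT      <validation code from the Azure Portal>

; point the apex at your static web app
@    ALIAS    <your-app-name>.azurestaticapps.net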

Wrap up

In this post, I recapped my favorite things about Azure Static Web Apps: deployment environments, CLI support, the local development environment, authentication and authorization support, and apex domain support.

How has your experience been? Let me know in the comments!

 

]]>
<![CDATA[ Migrating my site from Jekyll to Ghost ]]> https://www.daveabrock.com/2021/05/10/migrating-my-site-jekyll-ghost/ 60994c0955f561003b819914 Mon, 10 May 2021 11:17:47 -0500 You may have noticed things are looking a little different around here. I've completely redesigned my website and even moved to a new platform, Ghost Pro (hosted). I was previously using Jekyll over GitHub Pages. It took a little bit of work, but I found a great theme (yes, I know, I'm famous for switching themes all the time), found out how not to break existing RSS feeds, and learned how not to break my links. I'm happy with how it turned out.

Jekyll was great. So why did I switch?

Over the last 18 months, I've gone from being a typical "I should blog more" developer to pushing out content consistently twice a week. It's been quite the experience: like working with code over time, some pieces I'm proud of, some make me cringe, but I'm happy with my progress.

I love serving the community, but it can be time-consuming (like most community members, most of my work is on my own time on top of a busy family life and a day job). I easily spend a few hours a month (if not more) on maintaining various aspects of my Jekyll site. As I'm trying to serve the developer community in different ways, I've lost the passion for maintaining my site myself. These days, I want to open a browser and write. (I did try various Jekyll CMSs, but none were a good fit.) I want to be as productive as I can and am OK with not having complete control over every part of my site. It's your textbook IaaS vs. PaaS vs. SaaS argument: do you want convenience or control?

That's not to say the time spent maintaining my site was wasted. I enjoyed learning about static site generators and the Ruby ecosystem. It's just not something I want to invest time in anymore. I'm also very happy with Ghost as a CMS. From a pure writing and publishing perspective, I think it's a better and more productive experience for me.

After taking a fresh look at my site, it was a good opportunity to find a decent commenting system.

Scrapping Disqus

Some time ago, I scrapped comments altogether on my site. Comments are a great way to engage (and learn, improve, and exchange ideas with my readers), but I couldn't handle Disqus, the most popular (and free) commenting system. It's bloated, has a sketchy tracking history, and runs a suspect advertising engine. In terms of performance, I shouldn't have to read articles on lazy-loading Disqus comments just to make its performance acceptable. So, I scrapped Disqus until I could find a better alternative.

I've decided to use Commento, a commenting system that is focused on privacy and performance. I see a lot of folks I trust are using it, and I like what I see so far. It does come with a cost, but it's minimal (and is covered by my generous GitHub sponsors, Niels Swimberghe and Thomas Ardal). I'm more than happy to support good software.

I hope you enjoy the new site. If you don't (or you do), please let me know in the comments. Thanks for reading!

]]>
<![CDATA[ The .NET Stacks #48: ⚡ Sockets. Sockets everywhere. ]]> https://www.daveabrock.com/2021/05/08/dotnet-stacks-48/ 60986f085396f6003e95819d Sat, 08 May 2021 18:26:00 -0500 NOTE: This is the web version of my weekly newsletter, which was released on May 08, 2021. To get the issues right away, subscribe at dotnetstacks.com or at the bottom of this post.

Happy Monday! Here’s what we’re talking about this week:

  • One big thing: Microsoft announces Azure Web PubSub
  • The little things: Logging middleware coming, new try-convert release, and EF perf improvements
  • Last week in the .NET world

Microsoft announces Azure Web PubSub

Last week, Microsoft rolled out a public preview of Azure Web PubSub, a managed service for building real-time web applications using WebSockets. According to Microsoft, Azure Web PubSub enables you to use WebSockets and the publish-subscribe pattern to build real-time web applications easily. This service helps you manage a lot of concurrent WebSocket connections for data-intensive apps. In these scenarios, implementing at scale is a challenge. It lowers the barrier to entry: all you need is a WebSocket client and an HTTP client. Think about simple messaging patterns with a lot of connections. What’s nice here is that you can use any addressable endpoint with Azure Web PubSub. Azure Web PubSub integrates with Azure Functions natively, with the capability to build serverless apps with WebSockets and C#, Python, Java, or JavaScript. You can use anything that can connect over a WebSocket, even desktop apps.
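
To make the “any WebSocket client” claim concrete, here’s a minimal sketch using the standard ClientWebSocket—no special SDK required. The instance name, hub name, and access token are placeholders you’d generate from the service:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Connect to a Web PubSub hub with a plain WebSocket client.
using var client = new ClientWebSocket();
await client.ConnectAsync(
    new Uri("wss://<your-instance>.webpubsub.azure.com/client/hubs/chat?access_token=<token>"),
    CancellationToken.None);

// Print anything published to the hub.
var buffer = new byte[4096];
while (client.State == WebSocketState.Open)
{
    var result = await client.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
    Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
}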

Here’s what’s on our minds: how is this different than the SignalR service? While both services are internally built from similar tech, the most significant difference is that there’s no client or protocol requirement with Azure Web PubSub. You can bring your own WebSocket library if you wish. Also, unlike SignalR, you’re just working with WebSockets here—you won’t see automatic reconnect scenarios, long polling, and whatnot.

What does this mean for SignalR? Nothing. As a matter of fact, according to David Fowler, here’s when you’d want to stick with SignalR:

  • You’re a .NET-specialized dev shop and have SignalR client libraries and expertise
  • You need fallbacks other than WebSockets, like long polling or server-sent events
  • The existing SignalR client platforms work for you
  • You don’t want to manage a custom protocol and need more complex patterns (and want SignalR to manage it for you)
  • You don’t want to manage the reconnect logic yourself

Anthony Chu sums it up pretty well when he says:

Personally, I would use SignalR unless you need to support clients that don’t have a supported SignalR library. You could write the connect/reconnect/messaging code yourself, but SignalR does it all for you with a nice API.

Speaking of WebSockets, WebSocket compression is coming to .NET 6 thanks to a community contribution from Ivan Zlatanov. It’s an opt-in feature, and it looks like Blazor won’t be using it for now. Security problems can arise when server messages contain payloads from users and, as a result, shouldn’t be compressed.
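
For the curious, here’s a sketch of what opting in looks like on the client—hedged, since the API surface may still shift during the previews. The “Dangerous” prefix is deliberate, reflecting the security concern above:

using System.Net.WebSockets;

var client = new ClientWebSocket();

// Opt in to per-message deflate; compressing user-influenced payloads
// can leak secrets, which is why this is off by default.
client.Options.DangerousDeflateOptions = new WebSocketDeflateOptions
{
    ClientMaxWindowBits = 14,    // trade compression ratio for memory
    ClientContextTakeover = true // reuse the compression window across messages
};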


The little things: Logging middleware coming, new try-convert release, and EF perf improvements

As we discussed last week, .NET 6 Preview 4 will be a big release with lightweight APIs and AOT on the list (if not more). For Preview 5, ASP.NET Core will be introducing logging middleware. Logging request and response information isn’t the most fun, so middleware will do a lot of the heavy lifting for you (and you can extend it if needed). By default, the logging middleware won’t log response bodies, but that should be a configuration detail. This is a problem every ASP.NET developer has to deal with, so it’s nice to see it being generalized.
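
For context, here’s a sketch of the kind of hand-rolled logging the new middleware generalizes—inline request/response logging in Startup.Configure (the logger is injected by DI):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Logging;

public void Configure(IApplicationBuilder app, ILogger<Startup> logger)
{
    app.Use(async (context, next) =>
    {
        // Log the request, run the rest of the pipeline, then log the response.
        logger.LogInformation("Request: {Method} {Path}",
            context.Request.Method, context.Request.Path);

        await next();

        logger.LogInformation("Response: {StatusCode} for {Path}",
            context.Response.StatusCode, context.Request.Path);
    });

    // ...the rest of the pipeline
}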


A new try-convert release was shipped last week. This release includes enhancements for VB.NET Windows Forms conversions. The new, snazzy .NET Upgrade Assistant relies on this try-convert tool, which means VB.NET WinForms support has arrived with the Upgrade Assistant. That’s all I’m going to say about that, because the more you talk about VB.NET, the more likely someone is to call you an expert.


I missed this last week, but the Entity Framework team announced the results from new TechEmpower benchmarks. It looks like EF Core 6 is 33% faster than EF Core 5, and EF Core now reaches almost 94% of Dapper’s performance.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #47: 🧨 Now with 32 more bits ]]> https://www.daveabrock.com/2021/05/01/dotnet-stacks-47/ 60986c7e5396f6003e95813b Sat, 01 May 2021 18:13:00 -0500 NOTE: This is the web version of my weekly newsletter, which was released on April 26, 2021. To get the issues right away, subscribe at dotnetstacks.com or at the bottom of this post.

Happy Monday! Here’s what we’re talking about this week:

  • One big thing: Visual Studio 2022 to be 64-bit
  • The little things: API updates coming to Preview 4, memory dump tooling, dotnet monitor updates
  • Last week in the .NET world

One big thing: Visual Studio 2022 to be 64-bit

Last week, I provided a tooling update. Figuring the next major updates would occur around Build, I thought it was a good time to get up to speed. I pushed the newsletter, started my workday, then an hour later, Amanda Silver wrote a post about Visual Studio 2022. Oops.

The biggest news: Visual Studio 2022 will be 64-bit and is no longer chained to limited memory (4 gigs or so) in the main devenv.exe process. If you look at Silver’s post, she has a video of a 64-bit Visual Studio app working easily with 1600 projects and 300k files. Finally.

With interest dating back to 2011, the Visual Studio team responded in 2016, saying that “at this time we don’t believe the returns merit the investment and resultant complexity.” Times have changed—hardware capabilities have improved, and VS is no longer the only .NET IDE option—and it’s now worth it.

For example, if you look at the issue in the Developer Community, a few comments echo what Microsoft has heard for a few years now:

This year my company (size 15k+) started to provide Rider to employees and plans to cancel our Enterprise VS subscriptions, as VS is highly unproductive because of the memory limitations of a 32-bit application.
Lately I’m mainly using Rider. It does about 80% of what VS does, but it does that 80% without worrying about running out of RAM.

To be clear, as David Ramel notes, Visual Studio has long been able to create 64-bit apps on 64-bit machines, but the IDE itself has previously run as a 32-bit process. JetBrains cheekily welcomed Visual Studio to the 64-bit club, but this also means ReSharper will have more room to breathe. When the time comes, you’ll need to grab 64-bit versions of your favorite extensions as well.

Of course, 64-bit support isn’t the only thing coming to Visual Studio 2022. VS 2022 promises a refreshed UI, enhanced diagnostics and debugging, productivity updates, improved code search, and more. The first preview will ship this summer, with UI refinements and accessibility improvements. In the meantime, you can check out the most requested features from the community.


The little things: API updates coming to Preview 4, async analyzers, dotnet monitor updates


A couple weeks ago, we talked about the FeatherHttp project, a lightweight, scalable way of writing APIs without the typical overhead that comes with .NET APIs today.

Here’s what I said about it:

Fowler says the repo has three key goals: to be built on the same primitives as .NET Core, to be optimized to build HTTP APIs quickly, and to take advantage of existing .NET Core middleware and frameworks. According to a GitHub comment, the solution uses 90% of ASP.NET Core and changes the Startup pattern to be more lightweight.

That functionality will start rolling out in .NET 6 Preview 4 in a few weeks. (Along with AOT support, Preview 4 promises to be a satisfying update.)

David Fowler tweeted about it and the discussion was spicy.

I’m personally excited to give this a spin. If it isn’t your jam, it doesn’t have to be—it isn’t a breaking change, and you can always use your current solution. But as APIs evolve with microservices, much of the core logic is done outside of the endpoints anyway, so you may find you don’t need the complexity and overhead of writing APIs in MVC.


Mark Downie and his team have been doing some great work with managed memory dump analyzers. If you think that’s too low-level, reconsider—when logging and local debugging fail you (or issues aren’t reproducible easily), memory dump analysis can be your friend. It isn’t always easy to work with memory dumps, though. To help folks new to dump debugging, Microsoft has developed a new .NET Diagnostics Analyzer tool to quickly identify or rule out issues.

A great use case is finding async anti-patterns, a common cause of thread starvation, which may only become prevalent when you hit high production workloads. Check out the blog post to see how it can detect some “sync-over-async” issues.


In passing, we’ve talked about the dotnet monitor tool. It allows you to access diagnostic information from a running .NET process. It’s no longer experimental: it’s now a supported tool in the .NET ecosystem. Sourabh Shirhatti wrote about that and recent updates.


Lastly, a nice tip from Khalid Abuhakmeh:


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Make microservices fun again with Dapr ]]> https://www.daveabrock.com/2021/04/29/meet-dapr/ 608c3e3df4327a003ba2fea2 Wed, 28 Apr 2021 19:00:00 -0500 If you can manage a monolithic application correctly, often it’s all you need: building, testing, and deploying is relatively straightforward. If you manage this well—embracing the concepts of a loosely-coupled monolith—you can get away from a lot of the complexity of modern distributed applications.

In practice, though, I’ve seen monoliths open to abuse. After some time, the monolith becomes complicated, changes are filled with unintended side effects, and it becomes difficult to manage. Even simple changes require full deployments of the entire application, and testing is a nightmare. In these cases, you may want to consider a microservice architecture. As with anything else, you need to understand what you’re getting yourself into.

Instead, you’re signing up to manage a bunch of smaller applications with a lot to consider: how can I monitor and observe my services? How can I make them robust and resilient? How can I work with async communication, like messaging and event-driven patterns? When I do need to be synchronous, how can I handle that? How can I scale? Did I mention it has to be cloud-native through containerization and a container orchestrator like Kubernetes?

Three months after your stakeholders ask you to build a shopping cart, you’re looking at an app filled with complex dependencies, cobbled-together SDKs, and complicated configuration setups. You now spend your days futzing with YAML files—and believe me, some days it’s a word dangerously close to futzing—and you can’t help but wonder: wasn’t I supposed to be developing things?

Dapr, or Distributed Application Runtime, is here to help. Production-ready since February and sponsored by Microsoft, it manages distributed applications for you through a “building blocks” mechanism. These pluggable components manage the various layers of complexity for you:

  • State management
  • Invoking services
  • Bindings
  • Observability
  • Secrets
  • Actors

In these cases, you tell Dapr which “component” to use—like Redis or SQL Server, for instance—and Dapr will handle the rest for you through a single SDK or API. While you may be wary of adding another layer of abstraction, this provides a lot of value over other options like Istio or Orleans: one SDK to manage all your complexity.

Dapr uses a sidecar pattern. From your services, you can connect to Dapr from HTTP or gRPC. Here’s what it looks like (this graphic was stolen from the Dapr Docs):

Even better, you can enjoy the benefits from your local machine: you can run Dapr locally using the Dapr CLI.
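
For example, a typical invocation runs your app alongside a sidecar (the app ID and port here are illustrative):

dapr run --app-id bands-api --app-port 5000 -- dotnet run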

You may think you need to Dapr-ize your entire application if you want to leverage it—this is not true. If you feel you could benefit by doing pub-sub over HTTP, use that piece and leave the rest of the application alone. This isn’t an all-or-nothing approach.
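
To make that concrete: because the sidecar speaks plain HTTP, using the pub-sub building block can be as simple as a single POST. Here’s a sketch, assuming the default sidecar HTTP port of 3500 (the "pubsub" component and "concert" topic match the examples below):

using System.Net.Http;
using System.Net.Http.Json;

var http = new HttpClient();
var band = new { Id = 1, Name = "The Eagles, man" };

// POST /v1.0/publish/{pubsubname}/{topic} on the Dapr sidecar.
var response = await http.PostAsJsonAsync(
    "http://localhost:3500/v1.0/publish/pubsub/concert", band);
response.EnsureSuccessStatusCode();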

Before we proceed with one of my favorite Dapr use cases, let’s answer a common question.

Is this a new Service Mesh?

Is this a new Service Mesh? Thankfully, no.

If you’re unfamiliar, a service mesh is an infrastructure layer that can provide capabilities like load-balancing, telemetry, and service-to-service communication. In many ways, you might think abstracting these capabilities away from the services themselves is a step forward. In reality, though, I hate them so much. Adding another hop in your service to a complicated mesh that requires a talented I&O team to administer, coupled with a terrible developer experience? No, thanks. When I see these solutions, I wonder: does anyone think about the people who build applications?

While these meshes use a sidecar pattern like Dapr does, Dapr is not a dedicated network infrastructure layer but a distributed application runtime. While a service mesh enables things from the network perspective, Dapr offers that and more: application-level capabilities like actors that can encapsulate logic and data, complex state management, bindings, and more.

This isn’t a binary decision, of course—you could add service meshes with Dapr to isolate networking concerns away from application concerns.

Get your feet wet with simplified pub-sub

Because Azure Service Bus does not provide a great local development experience—and not without some discussion on making it better—it isn’t uncommon to see developers use RabbitMQ locally and in development environments but Azure Service Bus in production. As the Dapr for .NET Developers ebook notes, you could write your own abstraction layer, but swapping between the two is still a lot of work.

Here’s an example, using the Dapr .NET SDK, for publishing an event to a concert topic:

using Dapr.Client;

var band = new Band
{
  Id = 1,
  Name = "The Eagles, man"
};

var daprClient = new DaprClientBuilder().Build();

// Publish the band to the "concert" topic on the "pubsub" component.
await daprClient.PublishEventAsync("pubsub", "concert", band);

The Dapr SDK allows you to decorate your ActionResult methods with a Topic attribute. For example, here’s how a CreateBand signature would look:

[Topic("pubsub", "concert")]
[HttpPost("/bands")]
public async Task<ActionResult> CreateBand(BandModel band)

In ASP.NET Core, you’ll need to let the middleware know you’re working with Dapr in the ConfigureServices and Configure methods:

public void ConfigureServices(IServiceCollection services)
{
    // other stuff omitted
    services.AddControllers().AddDapr();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // other stuff omitted
    app.UseCloudEvents();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapSubscribeHandler();
    });
}

If I was using RabbitMQ with Dapr, I would include something like this in my configuration:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub-rq
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: host
    value: "amqp://localhost:5672"
  - name: durable
    value: true
  - name: deletedwhenunused
    value: true

If I wanted to swap it out with Azure Service Bus, I could simply update my configuration. (Each provider exposes different configuration options, as you can see.)

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: pubsub.azure.servicebus
  version: v1
  metadata:
  - name: connectionString
    value: <myconnectionstring> 
  - name: timeoutInSec
    value: 75 
  - name: maxDeliveryCount
    value: 3
  - name: lockDurationInSec
    value: 30

With the Dapr pub/sub model, a nice benefit is the capability to subscribe to a topic in a declarative fashion. In this example, I can define the Dapr component, the topic to subscribe to, the API to call, and which services have pub/sub capabilities.

apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: concert-subscription
spec:
  pubsubname: pubsub
  topic: concert
  route: /bands
scopes:
- ThisService
- ThatService
- TheOtherService

While this is a quick example—I plan a future dedicated post on Dapr pub-sub—you can see the benefits, especially when swapping dependencies. In the Dapr for .NET Developers ebook, Microsoft says they were able to remove 700 lines of code by using Dapr to switch between RabbitMQ and Azure Service Bus.

Conclusion

In this article, we introduced Dapr. We walked through the problems it solves, how it compares to a Service Mesh, and a simple scenario using pub-sub with RabbitMQ and Azure Service Bus. As I’m just beginning to get my feet wet, look for more posts on the topic soon!

References

]]>
<![CDATA[ The .NET Stacks #46: 📒 What's new with your favorite IDE ]]> https://www.daveabrock.com/2021/04/24/dotnet-stacks-46/ 608c3e3df4327a003ba2fea1 Fri, 23 Apr 2021 19:00:00 -0500 Happy Monday! Here’s what we’re talking about this week:

  • One big thing: Catching up on your IDEs
  • The little things: More on the Upgrade Assistant, k8s in Microsoft Learn, bUnit news
  • Last week in the .NET world

One big thing: Catching up on your IDEs

There’s been a lot of news lately on the IDE front. Whether you’re using Visual Studio or JetBrains Rider, there have been some big releases that are worth reviewing. (I’m focusing on IDEs here—I realize you can do .NET in VS Code, but that’s more of an editor, even if it can sometimes feel like an IDE with the right extensions.)

As for VS, last Wednesday, Microsoft released Visual Studio 2019 v16.10 Preview 2. From the .NET perspective (C++ is not part of the .NET family, so I don’t mention those), it includes many IntelliSense updates. You’ll now see completions for casts, indexers, and operators, the ability to automatically insert method call arguments when writing method calls, a UI for working with EditorConfig files, and a new way to visualize inheritance.

The release also contains new features for working with containers: you can run services defined from your Docker Compose files, and there’s also a new containers tooling window. You’ll also see some nice improvements with the Test Explorer. You can finally view console logs there, navigate to links from log files, and automatically create log files.

Finally, Visual Studio has pushed out updates to the Git experience. With v16.10, Visual Studio’s status bar now features an enhanced branch picker, a repository picker, and a sync button. You can also select a commit to open an embedded view of its details and the file changes in the Git Repository window without having to navigate to other windows (an excellent way to compare commits). You can learn about the Git changes in a devoted blog post. I’ll admit I haven’t checked it out in a while, but I’ve found the Git experience not to be very feature-rich. I might have to give it another look.

What about Rider? Earlier this month, JetBrains wrote about the Rider 2021.1 release. If you work in ASP.NET MVC, Web APIs, or Razor Pages, Rider now features scaffolding for Areas, Controllers, Razor Pages, Views, and Identity. It also generates boilerplate code for EF CRUD operations. Rider now supports ASP.NET Core route templates with inspections to check HTTP syntax errors and quick-fixes to fix route parameter issues.

They’ve also made some improvements to support the latest C# features: there’s increased support for patterns and records, and Rider has a new set of inspections and quick-fixes for records themselves. If you’re into tuples, Rider now allows you to use them with its refactoring tooling. They’ve even started looking at C# 10 and have taught Rider to work with the new constant interpolated strings feature.
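
If you haven’t seen that feature, it’s a small but pleasant one—a const string can now use interpolation, as long as every placeholder is itself a constant string:

// C# 10 constant interpolated strings (the names here are my own).
const string Product = "Rider";
const string Greeting = $"Hello from {Product}!";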

You can check out the What’s New in Rider page for all the details on their latest release.


The little things: More on the Upgrade Assistant, k8s in Microsoft Learn, bUnit news

Last week, I talked about hot reload in .NET 6. A few of you were kind enough to let me know that you were having issues seeing the GIFs. If you were having problems, I apologize (you can look at the web version of the newsletter and see them in action). I was looking into hot reload for a piece for the Telerik developer blog that was published last week. Check it out to see how you can enable it, how it works, how it handles errors, and its limitations.


While I’m already doing some shameless plugs, I also wrote a piece about the .NET Upgrade Assistant. We’ve mentioned it in passing, but it’s a global command-line tool that guides you through migrating your .NET Framework apps to .NET 5. It doesn’t do everything for you, as some APIs, like System.Web, aren’t available in .NET Core—but it goes a long way in doing the painful bits. It comes with an (optional) extensibility model, which allows you to customize upgrade steps without modifying the tool itself. For example, you can explicitly map NuGet packages to their replacements, add custom template files, and add custom upgrade steps. To do this, you include an ExtensionManifest.json file.

When I migrated an MVC app, I thought about using it to delete the App_Start folder and its contents automatically. It isn’t supported (but is now tracked!), as right now you can only add files. In cases like these, though, I would hope this would be done automatically. But if you’re migrating a bunch of projects and know how you want to port over some code, you can customize it yourself. It should save you quite a bit of time.


Kubernetes has now made its way to Microsoft Learn. Microsoft shipped a new module that walks you through application and package management using Helm. You can provision Kubernetes resources from the in-browser Azure Cloud Shell.

Did you think I’d write about Kubernetes and not leave you with a funny joke?


Congrats to Egil Hansen. bUnit, his popular Blazor testing library, is out of preview and has officially hit v1.0. You can check out the release notes for the latest release.


The API proposal for C# 10 interpolated strings is now officially approved. Check it out on GitHub.


🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Meet the .NET Upgrade Assistant, Your .NET 5 Moving Company ]]> https://www.daveabrock.com/2021/04/18/meet-dotnet-upgrade-assistant-your-dotnet-5-moving-company/ 608c3e3df4327a003ba2fea0 Sat, 17 Apr 2021 19:00:00 -0500 NOTE: This post originally appeared on the Progress Telerik Blog.

Moving is just the worst.

You have to rent a truck, hope it’ll hold all your things, beg friends and family to help, and feel the pain for days after the move. And during the move, you tell yourself: “Someday, I’ll hire someone to take all this stress off my hands. Imagine what I could be doing instead of all this tedious work!” The next time you move, you hire a moving company, and it’s incredible. The movers handle all the heavy lifting while you focus on breaking in your new home.

This is the promise of the .NET Upgrade Assistant, a global command-line tool that helps you migrate your .NET Framework apps to .NET 5. If .NET Framework is the sturdy home that’s showing its age and getting harder to repair and .NET 5 is a modern, efficient home built with state-of-the-art materials, the .NET Upgrade Assistant is the moving company that makes your life easier.

Let’s be clear: the .NET Upgrade Assistant is not a magic wand. You’ll likely need to do some manual work as you migrate. However, the tool can do most of your heavy lifting and allow you to focus on what’s important.

What exactly does the .NET Upgrade Assistant do? In prerelease, it’s a guided tool that leads you through your migration.

It performs the following tasks:

  • Adds analyzers that assist with upgrading
  • Determines which projects to upgrade and in what order
  • Updates your project file to the SDK format
  • Re-targets your projects to .NET 5
  • Updates NuGet package dependencies to versions compatible with .NET 5, and removes transitive dependencies present in packages.config
  • Makes C# updates to replace .NET Framework patterns with their .NET 5 equivalents
  • Where appropriate, adds common template files

The .NET Upgrade Assistant allows you to migrate from the following .NET Framework application types.

  • Windows Forms
  • WPF
  • ASP.NET MVC apps
  • Console apps
  • Class libraries

We’re going to evaluate the .NET Upgrade Assistant by migrating a legacy ASP.NET MVC app, eShopLegacyMVCSolution, which runs .NET Framework 4.7.2. I’ve borrowed this from the repository for the Microsoft e-book, Modernize existing .NET applications with Azure cloud and Windows containers, by Cesar de la Torre. My daveabrock/UpgradeAssistantDemo repository contains both the legacy solution and my upgraded solution. As a bonus, you can review my commit history to see how the code changes after each step.

Before You Start

Before you start using the Upgrade Assistant, make sure you’re familiar with Microsoft’s porting documentation and understand the migration limitations, especially when deciding to migrate ASP.NET applications. Additionally, you can use the .NET Portability Analyzer tool to understand which dependencies support .NET 5. It’s like calling the moving company first to find out what they can and cannot move and how long it might take.

Before you install the .NET Upgrade Assistant, you must ensure you install the following:

The tool also depends on the try-convert tool to convert project files to the SDK style. You must have version 0.7.212201 or later to use the Upgrade Assistant.

From a terminal, run the following to install the try-convert tool. (It’s a global tool, so you can run the command anywhere.)

dotnet tool install -g try-convert

If you have try-convert installed but need to upgrade to a newer version, execute the following:

dotnet tool update -g try-convert

Install the .NET Upgrade Assistant

We’re now ready to install the .NET Upgrade Assistant. To do so, execute the following from a terminal:

dotnet tool install -g upgrade-assistant

After installing the .NET Upgrade Assistant, run it by navigating to the folder where your solution exists and entering the following command.

upgrade-assistant <MySolution.sln>

Use the Upgrade Assistant to Migrate to .NET 5

To get started, I’ll run the following command from my terminal. (The default command should work, but if needed, you can pass other flags like --verbose.)

upgrade-assistant eShopDotNet5MVC.sln

The tool executes and shows me the steps it will perform. For each step in the process, I can apply the next step in the process, skip it, see details, or configure logging. Most of the time, you’ll want to select Apply next step. To save some time, you can press Enter to do this.

The .NET Upgrade Assistant starts

When the tool starts, it places a log.txt file at the root of your project.

Back Up Project

The first step is to back up the project. The .NET Upgrade Assistant asks if you want to use a custom path for your backup or a default location. Once this completes, we’re ready to convert the project file.

The .NET Upgrade Assistant backs up your project

Convert Project Files to SDK Style

.NET 5 projects use the SDK-style format. In this step, the Upgrade Assistant converts your project file to this SDK format using the try-convert tool. During this process, we see the tool is warning us that a few imports, like System.Web, might need manual intervention after migration.

The .NET Upgrade Assistant converts the project file to the SDK style

Update TFM

Next, the .NET Upgrade Assistant will update the Target Framework Moniker (TFM) to .NET 5.0. In my case, the value changes from net472 to net5.0.

The .NET Upgrade Assistant updates the TFM

Update NuGet Packages

Once the Upgrade Assistant updates the TFM, it attempts to update the project’s NuGet packages. The tool uses an analyzer to detect which references to remove and which packages to upgrade with their .NET 5 versions. Then, the tool updates the packages.

The .NET Upgrade Assistant updates the NuGet packages

Add Template Files

After the tool updates any NuGet packages, it adds any relevant template files. ASP.NET Core uses template files for configuration and startup. These typically include Program.cs, Startup.cs, appsettings.json, and appsettings.Development.json.

The .NET Upgrade Assistant adds template files

Migrate Application Configuration Files

Now the tool is ready to migrate our application configuration files. The tool identifies what settings are supported, then migrates any configurable settings to my appSettings.json file. After that is complete, the tool migrates system.web.webPages.razor/pages/namespaces by updating _ViewImports.cshtml with an @addTagHelper reference to Microsoft.AspNetCore.Mvc.TagHelpers.

The .NET Upgrade Assistant migrates app config files
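
For reference, the migrated _ViewImports.cshtml ends up looking something like this (the @using namespace is illustrative and will match your project):

@using eShopDotNet5MVC
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers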

Update C# Source

Now, the .NET Upgrade Assistant upgrades C# code references to their .NET Core counterparts. You’ll see several steps listed in the terminal—not all apply. In those cases, they’ll be skipped over and flagged as [Complete].

In my case, the step first removes any using statements that reference .NET Framework namespaces, like System.Web. Then, it ensures my ActionResult calls come from the Microsoft.AspNetCore.Mvc namespace. Finally, the Upgrade Assistant ensures that I don’t use HttpContext.Current, which ASP.NET Core doesn’t support.

The .NET Upgrade Assistant updates the C# source

The final step is to evaluate the next project. Since our solution only has one project, the tool exits.

Manually Fix Remaining Issues

As you head back to your project, you’ll see build errors. This is normal. This tool will automate a lot of the migration for you, but you’ll still need to tidy some things up. A majority of these issues involve how ASP.NET Core handles startup, configuration, and bundling.

  • The Global.asax and Global.asax.cs files are no longer needed in ASP.NET Core, as the global application event model is replaced with ASP.NET Core’s dependency injection model in Startup.cs.
  • You won’t need the App_Start folder or any of the files in it (BundleConfig.cs, FilterConfig.cs, and RouteConfig.cs). Go ahead and delete it. Feels good, doesn’t it?
  • After you do this, most of your remaining errors are related to the bundling of your static assets. ASP.NET Core works with several bundling solutions. Read the bundling documentation and choose what works best for your project.
  • Finally, fix any issues that remain. Your mileage may vary, but my changes were minimal. For example, in my _Layout.cshtml file, I had to inject an IHttpContextAccessor to access the HttpContext.Session (see the sketch after this list), and I also needed to clean up some ActionResult responses.
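
Here’s a sketch of that injection (it assumes services.AddHttpContextAccessor() is registered in Startup.ConfigureServices, and the session key is illustrative):

@using Microsoft.AspNetCore.Http
@inject IHttpContextAccessor HttpContextAccessor

@{
    // Read a value from session through the injected accessor.
    var cartCount = HttpContextAccessor.HttpContext?.Session.GetInt32("CartCount") ?? 0;
}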

Extending the .NET Upgrade Assistant

While the Upgrade Assistant fulfills most of your use cases, it has an optional extensibility model that allows you to customize upgrade steps without modifying the tool itself. For example, you can explicitly map NuGet packages to their replacements, add custom template files, and add custom upgrade steps. To get started, you’ll include an ExtensionManifest.json file that defines where the tool finds the different extension items. You need a manifest, but all of the following elements are optional, so you can define only what you need.

{
  "ExtensionName": "My awesome extension",

  "PackageUpdater": {
    "PackageMapPath": "PackageMaps"
  },
  "TemplateInserter": {
    "TemplatePath": "Templates"
  },

  "ExtensionServiceProviders": [
    "MyAwesomeExtension"
  ]
}

When you run the Upgrade Assistant, you can pass an -e argument to pass the location of the manifest file, or define an UpgradeAssistantExtensionPaths environment variable. Check out the documentation for details.
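
For example, pointing the tool at a manifest might look like this (the paths are illustrative):

upgrade-assistant eShopDotNet5MVC.sln -e ./MyAwesomeExtension/ExtensionManifest.json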

Do you remember when we manually deleted the Global.asax and the contents of the App_Start folder? If we’re upgrading many projects, consider adding a custom upgrade step to delete these.

Learn More

As I mentioned, the .NET Upgrade Assistant is a promising and powerful tool—however, it’s in prerelease mode and is quickly evolving. Make sure to check out the tool’s GitHub repository to get up to speed and report any issues or suggestions. You can also review the tool’s roadmap.

Stay tuned for improvements. For example, .NET 5 is a “Current” release—this means it’s supported for 3 months after .NET 6 officially ships (the support ends in February 2022). .NET 6, to be released this November, is a Long Term Support (LTS) release—meaning it’s supported for a minimum of 3 years. As a result, you’ll soon see the ability to choose which release to target.

Conclusion

In this article, we toured the new .NET Upgrade Assistant and showed how to speed up your migration to .NET 5. Have you tried it? Leave a comment and let me know what you think.

]]>
<![CDATA[ The .NET Stacks #45: 🔥 At last, hot reload is (initially) here ]]> https://www.daveabrock.com/2021/04/17/dotnet-stacks-45/ 608c3e3df4327a003ba2fe9f Fri, 16 Apr 2021 19:00:00 -0500 Happy Monday! Here’s what we’re talking about this week:

  • One big thing: .NET 6 Preview 3 is here, and so is hot reload
  • The little thing: C# updates
  • Last week in the .NET world

.NET 6 Preview 3 is here, and so is hot reload

On Thursday, Microsoft rolled out .NET 6 Preview 3. Richard Lander has the announcement covered. As he notes in the post, the release is “dedicated almost entirely to low-level performance features.” For example, we’ve got faster handling of structs as Dictionary values, faster interface checking and casting thanks to the use of pattern matching, and code generation improvements. The team resolved an issue where NuGet restore failed on Linux thanks to previous certificate issues. There were also updates to EF Core and ASP.NET Core.

Oh, and initial hot reload support is finally here.

Hot reload isn’t just for Blazor developers to enjoy—it’s built into the .NET 6 runtime. With Preview 3, you can use it by running dotnet watch in your terminal with ASP.NET Core web apps—Razor Pages, MVC, and Blazor (Server and WebAssembly). In future updates, you’ll enjoy Visual Studio support and the ability to use it with other project types, like console apps and client and mobile apps.

It’s been highly requested, and for a good reason—front-end frameworks and libraries based on interpreted languages like JS have enjoyed this for the last five years, and it’s a fantastic productivity booster. It’s easy to rag on Microsoft for taking this long—but they’ve had more significant problems during this time. Five years ago, Microsoft was preparing to roll out the first version of .NET Core, and a component library like Blazor was barely a thought. Now, the runtime is ready to address these issues (as it’s a main goal of .NET 6).

I tried out hot reload with Blazor Server this weekend (a blog post is coming). I’ll include some GIFs that show what it’s like.

Here’s basic editing of static text:

Here’s what happens when I update C# code:

In the following example, you’ll see here that it preserves state. When I change the currentCount value, the state of the component is maintained. I’ll need to refresh the page to see the new currentCount.

Here’s where I put it all together by dropping in components (with independent state) and editing some CSS for good measure.

However, not all code actions are supported. When this happens—like renaming a method—it will revert to current dotnet watch behavior by recompiling and refreshing your page with the latest bits.

If you have runtime errors, a banner displays at the top with the appropriate information. When you resolve your issues, the app will recompile and refresh.

On the subject of Blazor, Preview 3 ships with a BlazorWebView control. This allows WPF and Windows Forms developers to embed Blazor functionality into existing .NET 6 desktop apps.


The little thing: C# updates

Last week, Bill Wagner announced open-source C# standardization. In addition to the compiler work repo (in dotnet/roslyn) and the repo for C# language evolution (dotnet/csharplang), there is now a new repo (dotnet/csharpstandard) dedicated to documenting the standard for the latest C# language versions.

The new repo sits under the .NET Foundation, and as Wagner states:

Moving the standards work into the open, under the .NET Foundation, makes it easier for standardization work. Everything from language innovation and feature design through implementation and on to standardization now takes place in the open. It will be easier to ask questions among the language design team, the compiler implementers, and the standards committee. Even better, those conversations will be public … The end result will be a more accurate standard for the latest versions of C#.

If you’re having trouble distinguishing between dotnet/csharplang and dotnet/csharpstandard, you aren’t alone. Bill Wagner notes that there’s some overlap between the repos, and it’s a work in progress.

Speaking of C#, it’s nice to check out any language proposals from time-to-time, and file-scoped namespaces is making progress as a C# 10 proposal.
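
If you haven’t seen the proposal, it trades the braces and indentation for a single declaration that applies to the whole file (the names here are mine):

// File-scoped namespace: applies to everything below it in the file.
namespace MyApp.Services;

public class GreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}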


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Instant Feedback Is Here: Introducing Hot Reload in .NET 6 ]]> https://www.daveabrock.com/2021/04/13/instant-feedback-is-here-introducing-hot-reload-in-dotnet-6/ 608c3e3df4327a003ba2fe9e Mon, 12 Apr 2021 19:00:00 -0500 NOTE: This post originally appeared on the Progress Telerik Blog.

With .NET 6—and its release officially going live in November 2021—a big focus is improving developer inner-loop performance. The idea of inner-loop performance comes down to this: when I make a code change, how quickly can I see it reflected in my application?

With .NET Core, you can do better than the traditional save-build-run workflow with dotnet watch. The tool can watch for your source files to change, then trigger compilation. When running dotnet watch against your ASP.NET Core web apps—for MVC, Blazor, or Razor Pages—you still need to wait for recompilation and your app to reload. If you do this repeatedly throughout the day, sitting and waiting can become frustrating. Waiting 10 seconds one time doesn’t seem like a huge deal. If you do this 100 times a day, you’re killing almost 17 minutes waiting for your app to reload!

Developers in other ecosystems—especially in the front-end space—are familiar with the concept of hot reloading: you save a file, and the change appears almost instantaneously. Once you work with hot reloading, it’s tough to go back. As the .NET team tries to attract newcomers (another goal for the .NET 6 release), not having this feature can be a non-starter, even considering all the other wonderful capabilities a mature framework like .NET has to offer. (Not to mention that the insiders have been impatiently waiting for this for quite some time, too.)

The wait is finally over! Starting with .NET 6 Preview 3, you can use hot reload with your ASP.NET Core applications—including Blazor (both Blazor Server and Blazor WebAssembly), Razor Pages, and MVC. You can see hot reloading for static assets like CSS and compiled C# code as well.

In this post, I’ll show you how it works with a Blazor Server application. Hot reload currently only works with running dotnet watch from your terminal. In future previews, you’ll see Visual Studio integration and support for client and mobile applications.

How to Use Hot Reload Today

To try out hot reload today, you’ll first need to install the latest .NET 6 preview SDK (at the time of this post, it’s SDK 6.0.100-preview.3). You’ll need to do this whether or not you have the latest Visual Studio 2019 bits installed. It’ll be a few months before the .NET 6 preview updates are in sync with Visual Studio updates.

Once you’ve installed the latest .NET 6 preview SDK, update your profile in your Properties/launchSettings.json file to one of the following.

  • Blazor Server, MVC, and Razor Pages: "hotReloadProfile": "aspnetcore"
  • Blazor WebAssembly: "hotReloadProfile": "blazorwasm"

Since I’m using Blazor Server, my profile looks like the following.

"profiles": {
    "HotReloadDotNet6": {
      "commandName": "Project",
      "dotnetRunMessages": "true",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "hotReloadProfile": "aspnetcore"
    }
  }

Basic Usage

To start, let’s change the heading of our Index component and see how quickly the reload occurs—it seems instantaneous.

Change the heading text to see hot reload in action

It’s great that it’s fast, but that’s not very exciting. I just changed static text. What about changing the code that previously required recompilation of our application? In the Counter component, I’m going to increase the count by five every time a user clicks the button. You’ll see the change occurs just as fast.

Update C# code to see the instant feedback

What happens when we are working with user state? For example, if I am in the middle of a user interaction and change the app, will that interaction be reset? In my previous example, I clicked the Counter component’s button until the count reached a value of 20. If I change the currentCount from 0 to 10, will things reset?

The answer is no—the state is independently preserved! Watch as I continue increasing the counter. When I refresh the page, my counter starts at 10, as I’d expect.

Hot reload preserves a component's state

As we put it all together, watch how I can add multiple components with their own state and modify CSS—all with the instant feedback that hot reloading provides.

Hot reload allows me to iterate quickly

Working with Errors

You might be wondering what happens when dotnet watch encounters runtime exceptions or build errors. Will it exit and decorate my screen with a stack trace? To try this out, let me fat-finger a variable name by changing the iterator to currentCounts.

Hot reload handles errors gracefully

When this happens, you’ll see a red banner at the top of the page. It includes the specific error message and where it occurs. When I fix the error, a rebuild occurs. It isn’t instantaneous, but it works well—most times, you don’t have to shut down and restart the application.

When I resolve my error, a rebuild occurs

What Are the Limitations?

When you initially use dotnet watch with hot reload, your terminal will display the following message:

Hot reload enabled. For a list of supported edits, see https://aka.ms/dotnet/hot-reload. Press "Ctrl + R" to restart.

That link is worth checking out to see which changes are supported. With a few exceptions, most of these changes are supported—see the example after this list.

  • Types
  • Iterators
  • Async/await expressions
  • LINQ expressions
  • Lambdas
  • Dynamic objects
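
For example, tweaking a method body—like the Counter component’s increment amount from earlier—is a supported edit that applies on save, with no rebuild and no lost state:

private void IncrementCount()
{
    currentCount += 5; // was: currentCount++
}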

As for the unsupported changes, here are a few of the major ones:

  • Renaming elements
  • Deleting namespaces, types, and members
  • Adding or modifying generics
  • Modifying interfaces
  • Modifying method signatures

When you make a change that hot reload doesn’t support, it’ll fall back to typical dotnet watch behavior. Your app will recompile and reload in your browser. For example, this occurs when I rename my method from IncrementCount to IncrementCounts.

When hot reload isn't supported, it falls back

Conclusion

In this post, I introduced hot reload functionality that is new in .NET 6 Preview 3. I discussed how to use it today, and how it handles various development scenarios. I also discussed which changes are unsupported.

While Blazor receives a lot of fanfare in ASP.NET Core, it’s important to note that this benefits the entire .NET ecosystem. For example, you can use this now with Razor Pages and MVC—and you’ll soon see support with client and mobile apps as well. Give it a shot today, and let me know what you think!

]]>
<![CDATA[ The .NET Stacks #44: 🐦 APIs that are light as a feather ]]> https://www.daveabrock.com/2021/04/10/dotnet-stacks-44/ 608c3e3df4327a003ba2fe9d Fri, 09 Apr 2021 19:00:00 -0500 Happy Monday! Here’s what we’re talking about this week:

  • One big thing: Looking at the FeatherHttp project
  • The little things: Azure Static Web Apps with DevOps, new OSS badges, code complexity tip
  • Last week in the .NET world

One big thing: Looking at the FeatherHttp project

We’ve talked in the past about ASP.NET Core MVC APIs and their role in the .NET Core ecosystem. While MVC has received performance improvements and does what it sets out to do, it carries a lot of overhead and is often reminiscent of the “here you go, have it all” reputation of .NET Framework. It’s a robust solution that allows you to build complex APIs but comes with a lot of ceremony. With imperative frameworks like Go and Express, you can get started immediately and with little effort. No one has said the same about writing APIs in ASP.NET Core. It’s a bad look on the framework in general, especially when folks want to try out .NET for the first time.

Last year, Dapr pushed cross-platform samples for the major frameworks, and this tweet shows a common theme with MVC:

Can we have solutions that allow you to get started quickly and avoid this mess? What if you aren’t a fan of controllers? You could use external libraries like MediatR or API Endpoints, or a framework like Nancy—but it still feels like the native runtime deserves better. The ASP.NET Core team has thought about this for a while.

The route-to-code alternative, which I wrote about, is a great start. It allows you to write simple JSON APIs, using an API endpoints model—with some helper methods that lend a hand.

Here’s a quick example:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/hello/{name:alpha}", async context =>
    {
        var name = context.Request.RouteValues["name"];
        await context.Response.WriteAsJsonAsync(new { message = $"Hello {name}!" });
    });
});

It’s great, but Microsoft will tell you it’s only for simple APIs. Their docs clearly state: Route-to-code is designed for basic JSON APIs. It doesn’t have support for many of the advanced features provided by ASP.NET Core Web API. This begs the question: will .NET ever have a modern, lightweight API solution that has a low barrier to entry and also scales? Can I have a lightweight API that starts small and allows me to add complex features as I need them?

That is the goal of FeatherHttp, a project from ASP.NET Core architect David Fowler. Triggered by Kristian’s tweet, Fowler says the repo has three key goals: to be built on the same primitives as .NET Core, to be optimized to build HTTP APIs quickly, and to take advantage of existing .NET Core middleware and frameworks. According to a GitHub comment, the solution uses 90% of ASP.NET Core and changes the Startup pattern to be more lightweight. You can also check out a tutorial that walks you through building the backend of a React app with basic CRUD APIs and some serialization.

Here’s an example:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.Create(args);

app.MapGet("/", async http =>
{
    await http.Response.WriteAsync("Hello World");
});

await app.RunAsync();

Is this a fun experiment (currently at version 0.1.82-alpha) or will it make its way into ASP.NET Core? I’m not a mind reader, my friends, but I do know two things: (1) David Fowler is the partner architect on ASP.NET Core, and (2) big objectives for .NET 6 are to appeal to new folks and to improve the developer inner-loop experience. I suspect we’ll be hearing a lot more about this. Stay tuned.


The little things: Azure Static Web Apps with DevOps, new OSS badges, code complexity tip

Last week, Microsoft announced that Azure Static Web Apps supports deployment through Azure DevOps YAML pipelines. I wrote about it as well. It opens doors for many corporate customers who aren’t ready to move to GitHub yet—while they’ve made great strides, GitHub still has some work to do to match the robust enterprise capabilities of Azure DevOps.

I was able to move one of my projects over seamlessly. Unlike GitHub Actions, Azure DevOps handles PR triggers for you automatically, so my YAML is pretty clean:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureStaticWebApp@0
    inputs:
      app_location: "BlastOff.Client"
      api_location: "BlastOff.Api"
      output_location: "wwwroot"
    env:
      azure_static_web_apps_api_token: $(deployment_token)

While it isn’t as streamlined and elegant as the GitHub experience—you need to configure your deployment token manually, and you don’t get automatic staging environments—this should help improve adoption. If you’re wondering whether to use Azure Static Web Apps with Azure DevOps or GitHub, I’ve got you covered.


The ASP.NET Core team has introduced “good first issue” and “Help wanted” GitHub badges. If you’ve ever wanted to contribute to ASP.NET Core but didn’t know where to start, this might help.


Sometimes, it seems that processing a collection of objects is half of a developer’s job. It can often be a subject of abuse, especially when it comes to nested loops and their poor performance.

Here’s an example of iterating through a list of Blogger objects, checking if a Url exists, and adding it to a list. (I’m using records and target-typed new expressions for brevity.)

using System.Collections.Generic;

var result = new List<string>();
var bloggers = new List<Blogger>
{
    new("Dave Brock", "https://daveabrock.com", true),
    new("James Clear", "https://jamesclear.com", false)
};

foreach (var blogger in bloggers)
{
    if (blogger.IsTechBlogger)
    {
        var url = blogger.Url;
        if (url is not null)
            result.Add(url);
    }
}

record Blogger(string Name, string Url, bool IsTechBlogger);

Instead, try a LINQ collection pipeline:

using System.Collections.Generic;
using System.Linq;

var bloggers = new List<Blogger>
{
    new("Dave Brock", "https://daveabrock.com", true),
    new("James Clear", "https://jamesclear.com", false)
};

var urlList = bloggers.Where(b => b.IsTechBlogger)
                      .Select(b => b.Url)
                      .Where(u => u is not null)
                      .ToList();

record Blogger(string Name, string Url, bool IsTechBlogger);

🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Working with the Blazor DynamicComponent ]]> https://www.daveabrock.com/2021/04/08/blazor-dynamic-component/ 608c3e3df4327a003ba2fe9c Wed, 07 Apr 2021 19:00:00 -0500 Blazor is a joy when you know what you’re rendering—this typically involves knowing your types at compile time. But what happens when you want to render your components dynamically, when you don’t know your types ahead of time?

You can do it in a variety of ways: you can iterate through components and use complex conditionals, use reflection, declare a bunch of RenderFragments, or even build your own render tree. It can get complicated when dealing with parameters and complex data graphs, and none of these solutions are any good, really.

With .NET 6 Preview 1, the ASP.NET Core team introduced a built-in Blazor component, DynamicComponent, that allows you to render a component specified by type. When you bring in the component, you specify the Type and optionally a dictionary of Parameters.

You would declare the component like this:

<DynamicComponent Type="@myType" Parameters="@myParameterDictionary" />

The DynamicComponent has a variety of applications. I find it to be valuable when working with form data. For example, you can render data based on a selected value without having to iterate through possible types.

In this post, I’ll walk through how to use the DynamicComponent when a user selects a list from a drop-down list. I’ve got the repo out on GitHub for reference.

(Also, a shout-out to Hasan Habib’s nice video on the topic, which helped me think through some scenarios.)

Our use case

For a quick use case, I’m using a silly loan application. I want to gather details about where someone lives now (to determine how much they pay and whatnot). The drop-down has five possible values, and I’ve got some custom components in my Shared directory that render based on what a user selects.

Generate drop-down

To generate my drop-down list, I’ll use my component name as the option value. Here, I can use the nameof keyword, which returns component names as constant strings.

Here’s the markup of Index.razor so far:

@page "/"

<h1>Loan application</h1>

<div class="col-4">
    What is your current living situation?
    <select class="form-control">
        <option value="@nameof(DefaultDropdownComponent)">Select a value</option>
        <option value="@nameof(RentComponent)">Rent</option>
        <option value="@nameof(OwnHouseComponent)">Own house</option>
        <option value="@nameof(OwnCondoComponent)">Own condo or townhouse</option>
        <option value="@nameof(DaveRoommate)">I'm Dave's roommate</option>
    </select>
</div>

Write change event

We now need to decide what to do when a drop-down value is selected. To get the drop-down value, we work with the ChangeEventArgs type to get the value of the raised event—in this case, it’s the changed drop-down selection.

Before DynamicComponent came along, this is where you’d have logic to determine which component to render. Maybe you’d have a boolean flag, then use that in Razor markup. Instead, use Type.GetType to get the specific component to render.

Type selectedType = typeof(DefaultDropdownComponent);

public void OnDropdownChange(ChangeEventArgs myArgs)
{
  selectedType = Type.GetType($"DynamicComponentDemo.Shared.{myArgs.Value}");
}

Once I’m done with that, I can bind the drop-down’s onchange event to my OnDropdownChange method. Whenever the drop-down value changes, the method will trigger and determine which type it needs.

<select @onchange="OnDropdownChange" class="form-control">
  <option value="@nameof(DefaultDropdownComponent)">Select a value</option>
  <option value="@nameof(RentComponent)">Rent</option>
  <option value="@nameof(OwnHouseComponent)">Own house</option>
  <option value="@nameof(OwnCondoComponent)">Own condo or townhouse</option>
  <option value="@nameof(DaveRoommate)">I'm Dave's roommate</option>
</select>

Finally, I can render my DynamicComponent. I’ll place it right under my drop-down list.

<DynamicComponent Type="selectedType" />

Now we can see the page change using my DynamicComponent.

Optional: Pass in parameters

If your components have parameters, you can optionally pass them into your DynamicComponent. It takes a Dictionary<string, object>. The string is the name of your parameter, and the object is its value.

As a quick example, I can define ComponentMetadata through a quick class:

class ComponentMetadata
{
  public Type ComponentType { get; set; }
  public Dictionary<string, object> ComponentParameters { get; set; }
}

Then, I can create a dictionary for my components like this (only one component has a parameter):

private Dictionary<string, ComponentMetadata> paramsDictionaries = new()
{
  {
    "DefaultDropdownComponent",
    new ComponentMetadata { ComponentType = typeof(DefaultDropdownComponent)}
  },
  {
    "RentComponent",
    new ComponentMetadata { ComponentType = typeof(RentComponent)}
  },
  {
    "OwnCondoComponent",
     new ComponentMetadata { ComponentType = typeof(OwnCondoComponent)}
  },
  {
    "OwnHouseComponent",
    new ComponentMetadata { ComponentType = typeof(OwnHouseComponent)}
  },
  {
    "DaveRoommate",
    new ComponentMetadata
    {
      ComponentType = typeof(DaveRoommate),
      ComponentParameters = new Dictionary<string, object>()
      {
        { "CustomText", "Ooh, no." }
      }
    }
  }
};

Then, I could have logic that filters and passes a ComponentParameters instance to the DynamicComponent, depending on the type I select. There’s a lot of power here—you could pass in data from an API or a database, or even from a function, as long as you end up with a Dictionary<string, object>.
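
Here’s a hedged sketch of what that filtering could look like, reusing paramsDictionaries from above (the lookup-by-name approach and the selectedMetadata field are my assumptions, not the only way to wire it):

<DynamicComponent Type="selectedMetadata.ComponentType"
                  Parameters="selectedMetadata.ComponentParameters" />

@code {
    private ComponentMetadata selectedMetadata =
        new() { ComponentType = typeof(DefaultDropdownComponent) };

    public void OnDropdownChange(ChangeEventArgs myArgs)
    {
        // Look up the component type and its optional parameters by name.
        selectedMetadata = paramsDictionaries[myArgs.Value.ToString()];
    }
}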

You might be asking: Why not use the catch-all approach?

<DynamicComponent Type="@myType" MyParameter="Hello" MySecondParameter="Hello again" />.

Blazor architect Steve Sanderson explains:

If we do catch-all parameters, then every explicit parameter on DynamicComponent itself - now and in the future - effectively becomes a reserved word that you can’t pass to a dynamic child. It would become a breaking change to add any new parameters to DynamicComponent as they would start shadowing child component parameters that happen to have the same name … It’s unlikely that the call site knows of some fixed set of parameter names to pass to all possible dynamic children. So it’s going to be far more common to want to pass a dictionary.

Wrap up

In this post, we walked through the new DynamicComponent, which allows you to render components when you don’t know your types at compile time. We rendered a component based on what a user selects from a drop-down list, and we also explored how to pass parameters to the DynamicComponent.

]]>
<![CDATA[ The .NET Stacks #43: 📅 DateTime might be seeing other people ]]> https://www.daveabrock.com/2021/04/03/dotnet-stacks-43/ 608c3e3df4327a003ba2fe9b Fri, 02 Apr 2021 19:00:00 -0500 Happy Monday! I hope you have a productive week and get more traction than an excavator on the Suez Canal. (Sorry.)

  • One big thing: New APIs for working with dates and times
  • The little things: Hot reload, debugging config settings, Clean Architecture resources
  • Last week in the .NET world

One big thing: New APIs for working with dates and times

If you’re walking on the street and a person asks you the time, you can look at your watch and tell them. It’s not so simple in .NET. If you only care about the time, you need to create or parse a DateTime or DateTimeOffset and scrap any date information. This approach can be incredibly error-prone, as these structs carry time-zone-related logic that isn’t written for getting the absolute date and time. (That doesn’t even include its general weirdness, like DateTime.TimeOfDay returning a TimeSpan and DateTime.Date returning a new DateTime. Even on its best days, the DateTime struct is both overloaded and confusing.)

Over on GitHub, there’s a spicy issue (with some bikeshedding) discussing the possibility of a Date struct and a Time struct. Initially, Microsoft chose to expose the features as an external NuGet package. However, .NET developers are looking for the changes to be in the core of the .NET BCL APIs to avoid third-party library dependencies. The issue has a proposed API.

You may see these named differently: Visual Basic has used Date for three decades, so reusing that name would be a breaking change (and might confuse folks who use both C# and VB). Right now, DateOnly seems to be the leading name. I hope it isn’t permanent.

Here’s an interesting comment from the issue about a user’s scenario:

I’m instead interested in the Date class most of all … We have a lot of products that is valid from date X to date Y. When we used DateTimeOffset we always had to remember to start at date X 00:00:00 and end at date Y 23:59:59.999. Then we switched to NodaTime and used LocalDate from there and all a sudden we could just compare dates without bother about time. We store Date in the database and map the column to a LocalDate now instead of a DateTimeOffset (which makes no sense since Date in the DB don’t have time or offset) which makes our lives even easier.

This is all developers want from their SDKs: simplicity and predictability. The excellent NodaTime library provides these capabilities, but it isn’t coming from Microsoft—which stifles adoption. It’ll be great to have simpler APIs for working with dates and times, but it might be a while before we see them in the wild. (And for the record, DateTime isn’t going anywhere.)
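
As a quick sketch of the commenter’s scenario with the proposed type (using the DateOnly name that eventually won out):

var validFrom = new DateOnly(2021, 1, 1);
var validTo = new DateOnly(2021, 12, 31);
var today = DateOnly.FromDateTime(DateTime.Now);

// Pure date comparisons: no time-of-day or offset bookkeeping needed.
bool productIsValid = today >= validFrom && today <= validTo;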


The little things: Hot reload, debugging config settings, Clean Architecture resources

Say what you will about JavaScript (and believe me, I have), but it’s no mystery why it’s so widely used and adopted: the barrier of entry is low. I can open a text file, write some HTML markup (and maybe some JS), and open my web page to see the results. The instant feedback boosts productivity: in most common front-end frameworks, you can use “hot reload” functionality to see your changes immediately without having to restart the app.

That functionality is coming to ASP.NET Core (and Blazor) in .NET 6—we got a quick preview this week at Steve Sanderson’s talk at NDC Manchester. He showed off a promising prototype: he added and updated components (while preserving state) and updated C# code, and it all worked well. The error handling looks nice, too—the app shows an error at the top of your page, which goes away once you correct the issue and save. In some scenarios where hot reload isn’t possible, your app will refresh your browser as it does now. This functionality is almost ready to show off to a larger audience—look for it in .NET 6 Preview 3, which will be released in the next few weeks.

He also showed off an ErrorBoundary component, allowing developers to have greater control over error handling in Blazor apps. As it is today, when unhandled exceptions occur, the user connection is dropped. With this new functionality, you can wrap an ErrorBoundary around specific markup, where unhandled exceptions in that section of code still allow users to access other parts of the app.
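
The component eventually shipped in .NET 6 as ErrorBoundary. Here’s a sketch of its shape (ImageGallery is a made-up child component):

<ErrorBoundary>
    <ChildContent>
        <ImageGallery />
    </ChildContent>
    <ErrorContent>
        <p>The gallery crashed, but the rest of the app still works.</p>
    </ErrorContent>
</ErrorBoundary>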


A month ago, our friend Cecil Phillip tweeted:

I never knew about this, either—it helps you discover your environment variables and settings and where they’re getting read. (And, of course, it should be used for development purposes only so you don’t accidentally leak secrets.)
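
One likely candidate for this kind of discovery is IConfigurationRoot.GetDebugView(), which lists every configuration key, its value, and the provider that supplied it (that’s my assumption about the feature in question). A sketch, with a made-up endpoint path:

// Development-only: dumps every configuration key, its value, and the
// provider each value came from. Never expose this in production.
app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/debug-config", async context =>
    {
        var config = (IConfigurationRoot)context.RequestServices
            .GetRequiredService<IConfiguration>();
        await context.Response.WriteAsync(config.GetDebugView());
    });
});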


When it comes to recommending architecture solutions, I use the two words that consultants live by: It Depends®. Even so, there’s a lot to like about the Clean Architecture movement.

This week, Patrick Smacchia wrote about Jason Taylor’s popular solution template. Also, Mukesh Murugan has released Blazor Hero, a clean architecture template that includes common app scenarios under one umbrella. These are both good resources on understanding Clean Architecture.



🌎 Last week in the .NET world

These links are new. I checked.

🔥 The Top 5

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use Azure Static Web Apps with Azure DevOps pipelines ]]> https://www.daveabrock.com/2021/04/01/static-web-apps-azure-pipelines/ 608c3e3df4327a003ba2fe9a Wed, 31 Mar 2021 19:00:00 -0500 Last year, Microsoft released Azure Static Web Apps, a great way to bundle your static app with a serverless Azure Functions backend. If you have a GitHub repository, Azure Static Web Apps has you covered. You create an instance in Azure, select a GitHub repository, and Azure creates a GitHub Actions CI/CD pipeline for you that’ll automatically trigger when you merge a pull request into your main branch. It’s still in preview, but a GA release isn’t too far off.

To borrow from a famous Henry Ford quote: you can have it from any repo you want, so long as it’s GitHub.

That has changed. Azure Static Web Apps now provides Azure DevOps support. If you have a repository in Azure DevOps, you can wire up an Azure Pipelines YAML file that builds and deploys your app to Azure Static Web Apps. While it isn’t as streamlined and elegant as the GitHub experience—you need to configure your deployment token manually, and you don’t get automatic staging environments—it sure beats the alternative for Azure DevOps customers (that is, no Azure Static Web Apps support at all).

Since last fall, I’ve been experimenting with Azure Static Web Apps for my Blast Off with Blazor blog posts and accompanying repository. In this post, I’ll deploy my repository’s code from Azure DevOps to try out the support.

Prerequisites

Before we begin, I’ll assume you have an active Azure DevOps project. If you don’t, check out the docs.

If you need to import a Git repository from your local machine, as I did, perform the following steps:

  1. Navigate to your Azure DevOps repository.
  2. Select Import.
  3. Enter the URL of your Git repository.
  4. Select Import again.

Create an Azure Static Web Apps resource

To get started, create a new Azure Static Web App resource from the Azure Portal.

  1. Navigate to the Azure Portal at portal.azure.com.
  2. From the global search box at the top, search for and select Azure Static Web Apps (Preview).
  3. Select Create.
  4. Enter details for your subscription, resource group, name, and region.
  5. Under Deployment details, click Other. This option bypasses the automated GitHub flow and tells Azure you don’t need it.

Once you’re done, click Review + Create, then Create. When the deployment completes, navigate to your new resource. You’ll now need to grab the deployment token to use with Azure DevOps.

Access deployment token

Since you clicked Other when you created your resource, you’ll need to do some manual work to get Azure Static Web Apps working with Azure DevOps. To do so, select Manage deployment token.

From there, copy your token to a text editor so you can use it later. (Or, if you’re fancy, you can use your operating system’s clipboard history features.)

Create Azure Pipelines task

Now, we’ll need to create an Azure Pipelines task in Azure DevOps.

From your Azure DevOps repository, select Set up build.

From Configure your pipeline, select Starter pipeline.

In your pipeline, replace the existing YAML content with a new AzureStaticWebApp task. Pay special attention to the inputs section:

  • The app_location specifies the root of your application code. For me, this sits in a Blazor client directory at Blazor.Client.
  • The api_location is the location of your Azure Functions code. Pay attention here! If no API is found in the specified location, the build assumes it doesn’t exist and silently fails. (Trust me … I know this personally.)
  • The output_location includes the build output directory relative to app_location. For me, my static files reside in wwwroot.

We’ll work on the azure_static_web_apps_api_token value soon.

Here’s my YAML file. (You’ll notice it’s a bit cleaner than the GitHub Action.)

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureStaticWebApp@0
    inputs:
      app_location: "BlastOff.Client"
      api_location: "BlastOff.Api"
      output_location: "wwwroot"
    env:
      azure_static_web_apps_api_token: $(deployment_token)

Before we kick off the pipeline, we need to add our Azure Static Web Apps deployment token as an Azure Pipelines variable. To do so, select Variables and create a variable called deployment_token (assuming that’s what you named the value of azure_static_web_apps_api_token). Finally, select Keep this value secret, then click OK.

Select Save, then Save and run. With a little luck, it might pass on the first try. Congratulations! You now have an Azure Static Web Apps site built from an Azure DevOps repository.

Side note: The minimal Azure Pipelines YAML file

I can’t help but notice the brevity of the Azure Pipelines YAML file. Here’s another look at the file:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureStaticWebApp@0
    inputs:
      app_location: "BlastOff.Client"
      api_location: "BlastOff.Api"
      output_location: "wwwroot"
    env:
      azure_static_web_apps_api_token: $(deployment_token)

Compare that to the look of the GitHub Actions YAML file that Azure Static Web Apps created for me:

name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - main

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_DESERT_00CF9D310 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} 
          action: "upload"
          app_location: "BlastOff.Client"
          api_location: "BlastOff.Api"
          output_location: "wwwroot"

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_BLACK_DESERT_00CF9D310 }}
          action: "close"

While the deploy step is similar between the two, the GitHub Actions file has a lot of noise related to managing pull requests. As such, a common question might be: do I need to add PR trigger steps to my Azure Pipelines file? No! In my testing, I merged a pull request to main and kicked off a successful build and deploy without manually updating the YAML file.

Should I use Azure Static Web Apps with Azure DevOps or GitHub?

With two ways to deploy your Azure Static Web Apps (with the possibility of more), you might be wondering which option to choose.

Microsoft’s long-term DevOps investment is clearly in GitHub, not in maintaining competing code management solutions. Azure DevOps isn’t being phased out anytime soon, but if you are building new and don’t have a dependency on Azure DevOps, Azure Static Web Apps with GitHub is the way to go. However, if you are using Azure DevOps—for many, its robust enterprise capabilities make it hard to leave until GitHub matures in that space—this new support is a nice solution.

Wrap up

In this post, we demonstrated how to deploy an Azure Static Web App using an Azure DevOps pipeline. We created an Azure Static Web Apps resource, retrieved a deployment token, created an Azure Pipelines task, and explored why you would want to use GitHub or Azure DevOps for your Azure Static Web Apps.

]]>
<![CDATA[ The .NET Stacks #42: 🔌 When Active Directory isn't so active ]]> https://www.daveabrock.com/2021/03/27/dotnet-stacks-42/ 608c3e3df4327a003ba2fe99 Fri, 26 Mar 2021 19:00:00 -0500 Happy Monday to you all. Here’s what we have on tap this week.

  • One big thing: When Active Directory isn’t so active
  • The little things: A bunch of odds and ends
  • Last week in the .NET world

One big thing: When Active Directory isn’t so active

Mondays are typically long days. Tell that to Microsoft, who last Monday suffered another Azure Active Directory outage that took down most apps consuming AD, including the Azure Portal, Teams, Exchange, Azure Key Vault, Azure Storage, and more. The outage lasted a few hours (2 pm until 7 pm, in these parts), but lingering effects lasted much longer. The timing was unfortunate—isn’t it always?—as they’re rolling out 99.99% availability in April to customers with Premium licenses.

What happened? Azure AD runs an automated system that removes keys no longer in use. To support a “complex cross-cloud migration,” a specific key was marked to retain for longer than usual. Due to a bug, the system ignored the flag, the key was removed, and Azure AD stopped trusting the tokens from the removed key. When you pair this with the outage from September 2020—the culprit there was a code defect—you have a right to be concerned about Azure AD if you aren’t already.

Meanwhile, updates were quicker on Twitter than on their status pages. Microsoft has owned up to this, saying: “We identified some differences in detail and timing across Azure, Microsoft 365 and Dynamics 365 which caused confusion for customers … We have a repair item to provide greater consistency and transparency across our services.”

For Microsoft’s part, the notice says they are engaged in a two-stage process to improve Azure AD, including an effort to avoid what happened last Monday. This effort includes instituting a backend Safe Deployment Process (SDP) system to prevent these types of problems. The first stage is complete, and the second stage is planned for completion later this year.

Let’s hope so. It’s hard to swallow that such a critical service has a single point of failure. While there are many reasons for and against this design, we can all agree that Microsoft needs to improve resiliency for Azure AD. Instead of the time-honored tradition of Azure executives at Build or Ignite showing off a global map of all their new regions, I think we’d much rather have a slide showing off improvements to their flagship identity service.

The little things: A bunch of odds and ends

In the ASP.NET standup this week, James Newton-King joined Jon Galloway to talk about gRPC improvements for .NET 5. It gets low-level at times, but I enjoyed it and learned a lot.

For the improvements, benchmarks show the .NET gRPC implementation just behind Rust (which isn’t a framework, so that’s saying something). Server performance is 60% faster than .NET Core 3.1, and client performance is 230% faster.

To answer your next question: since IIS and HTTP.sys now support gRPC, does Azure App Service support it too? Not yet, but keep an eye on this issue for the latest updates.


Adam Sitnik, an engineer on the .NET team and the person behind BenchmarkDotNet, has a new repository full of valuable resources for learning about .NET performance.


Steve Sanderson, the creator of Blazor (and a recent interview subject), has created an excruciatingly detailed Blazor issue in GitHub to catch and handle exceptions thrown within a particular UI subtree. In effect, this brings “global exception handling” to Blazor.


This week, Nick Craver noted why Stack Overflow likely isn’t migrating to .NET 5. (You’ll want to read the entire thread for context.)


Shay Rojansky notes that EF Core is now fully annotated for C# reference nullability. As a whole, fully annotating nullability across .NET should be complete in .NET 6.


I’ve been intrigued this week by Daniel Terhorst-North writing about why he feels “every single element of SOLID is wrong.” It’s quite the statement, but the more you read, the less revolutionary it sounds. Things change and evolve. Whether it’s SOLID or any other prescribed “best practice,” I’ve learned to take things with a grain of salt and consider the tradeoffs.


I’ve been working with many scheduled GitHub Actions to automate much of how I put together this newsletter every week (like adding links to a persistent store and generating my Markdown file). When scheduling tasks, as with timed Azure Functions triggers, CRON is still king. It’s nice that GitHub Actions translates CRON syntax for you on hover, but I’m still going to mess it up.

What saves me every time is the crontab.guru site. (I’m not being asked to say this. I’m just a fan.) You can edit a CRON expression and instantly see what it means in plain English, which is perfect for a CRON amateur. You can also hit quick links with examples ready to go, like crontab.guru/every-day-8am.

crontab.guru

🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #41: 🎁 Your monthly preview fix has arrived ]]> https://www.daveabrock.com/2021/03/20/dotnet-stacks-41/ 608c3e3df4327a003ba2fe98 Fri, 19 Mar 2021 19:00:00 -0500 Did you know e-mail turns 50 this year? Do you remember when in 2004, Bill Gates pledged to rid the world of spam emails in two years?

  • One big thing: .NET 6 Preview 2 is here
  • The little things: Azure Functions .NET 5 support goes GA, new .NET APIs coming, event sourcing examples
  • Last week in the .NET world

One big thing: .NET 6 Preview 2 is here

Last week, Microsoft released .NET 6 Preview 2. From here on out, the .NET team will ship a new preview every month until .NET 6 goes live in November 2021. As a reminder, .NET 6 is an LTS release, meaning Microsoft will support it for three years.

A big focus for .NET 6 is improving the inner loop experience and performance: maximizing developer productivity by optimizing the tools we use. While it’s early, Stephen Toub writes that the team has trimmed overheads when running dotnet new, dotnet build, and dotnet run. This includes fixing issues where tools were unexpectedly JIT’ing and changing the ASP.NET Razor compiler to use a Roslyn source generator to avoid extra compilation steps. A chart of the drastic build-time improvements shows a clean build of Blazor Server decreasing from over 3600 milliseconds to around 1500 milliseconds. If you bundle this with a best-in-class hot reload capability, developers have a lot to be excited about with .NET 6.

In .NET 6, you’ll be hearing a lot about MAUI (Multi-Platform App UI), which is the next iteration of Xamarin.Forms. With MAUI, Xamarin developers can use the latest .NET SDKs for the apps they build. If you aren’t a Xamarin developer, you can still take advantage of MAUI: for example, using MAUI Blazor apps can run natively on Windows and macOS machines. With Preview 2, Microsoft enabled a single-project experience that reduces overhead when running Android, iOS, and macOS apps.

While .NET library improvements don’t come with the same fanfare, a couple of improvements caught my eye. The System.Text.Json library now has a ReferenceHandler.IgnoreCycles option that allows you to ignore cycles when you serialize a complex object graph—a big step for web developers. Little by little, Microsoft’s Newtonsoft comparison chart is getting better. Additionally, a new PriorityQueue<TElement, TPriority> collection enables you to add new items with a value and a priority.
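
Here’s a quick sketch of the latter:

var queue = new PriorityQueue<string, int>();
queue.Enqueue("Restart the server", priority: 2);
queue.Enqueue("Page the on-call engineer", priority: 1);
queue.Enqueue("Update the wiki", priority: 3);

// Dequeues in priority order (lowest value first): page, restart, update.
while (queue.TryDequeue(out string task, out int priority))
{
    Console.WriteLine($"[{priority}] {task}");
}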

Aside from the Razor performance improvements, ASP.NET Core now has support for custom event arguments in Blazor, CSS isolation for MVC and Razor Pages, and the ability to preserve prerendered state in Blazor apps. (And no, AOT is not ready yet—the comment “CTRL + F ‘AOT’, 0 results, closes the tab” from the last preview is the hardest I’ve laughed in a while.) In Entity Framework Core land, the team is working on preserving the synchronization context in SaveChangesAsync, flexible free-text search, and smoother integration with System.Linq.Async.

If you want the full details on what’s new in Preview 2, Richard Lander has it all. You can also check out what’s new in ASP.NET Core and Entity Framework Core as well.

The little things: Azure Functions .NET 5 support goes GA, new .NET APIs coming, event sourcing examples

I’ve mentioned this a few times before, but the Azure Functions team has developed a new out-of-process worker that will help with quicker support of .NET versions. Here’s the gist of why they are decoupling a worker from the host, from Anthony Chu:

The Azure Functions host is the runtime that powers Azure Functions and runs on .NET and .NET Core. Since the beginning, .NET and .NET Core function apps have run in the same process as the host. Sharing a process has enabled us to provide some unique benefits to .NET functions, most notably is a set of rich bindings and SDK injections. … However, sharing the same process does come with some tradeoffs. In earlier versions of Azure Functions, dependencies could conflict. While flexibility of packages was largely addressed in Azure Functions V2 and beyond, there are still restrictions on how much of the process is customizable by the user. Running in the same process also means that the .NET version of user code must match the .NET version of the host. These tradeoffs of sharing a process motivated us to choose an out-of-process model for .NET 5.

A few weeks ago, I wrote about how you can use this with .NET 5 today, but it was very much in preview and involved a lot of manual work. This week, it became production-ready. .NET 6 will support both the in-process and out-of-process options so that .NET Core 3.1 can remain supported. Long term (for .NET 7 and beyond), the team hopes to move new feature capabilities to the out-of-process worker completely, allowing support of .NET 7 on Day 1 from only the isolated worker.


The .NET team approved a proposal for a new Timer API. As ASP.NET Core architect David Fowler notes in the issue, the current implementation can incur overlapping callbacks (which aren’t async). Additionally, he notes it always captures the execution context, which can cause problems for long-lived operations. How will this new API help? It pauses when user code is executing while resuming the next period when it ends, can be stopped using a CancellationToken, and doesn’t capture an execution context. If you’re only firing a timer once, Task.Delay will continue to be a better option.
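
The proposal eventually shipped in .NET 6 as System.Threading.PeriodicTimer. A minimal sketch of the behavior described above:

using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(1));
using var timer = new PeriodicTimer(TimeSpan.FromSeconds(5));

try
{
    // Waits for the next period; the timer pauses while your code runs,
    // so callbacks never overlap.
    while (await timer.WaitForNextTickAsync(cts.Token))
    {
        Console.WriteLine($"Tick at {DateTime.Now:T}");
    }
}
catch (OperationCanceledException)
{
    Console.WriteLine("Timer stopped.");
}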

Also, .NET is getting a new Task.WaitAsync API. This appears to replace WhenAny, allowing Microsoft to assure us that naming things is still the hardest part of computer science. I say call it WhenAsync. If you look at the name hard enough after a few drinks, WhenAny is still there.
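
For reference, here’s a sketch of the shape WaitAsync took when it shipped in .NET 6 (GetDataAsync is a made-up method):

Task<string> fetch = GetDataAsync();

try
{
    // Await the task with a timeout instead of racing Task.WhenAny
    // against Task.Delay; a TimeoutException is thrown if it expires.
    string result = await fetch.WaitAsync(TimeSpan.FromSeconds(5));
    Console.WriteLine(result);
}
catch (TimeoutException)
{
    Console.WriteLine("Timed out.");
}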


If you’re into event sourcing, check out Oskar Dudycz’s EventSourcing.NetCore repo. It’s full of nice tutorials and practical examples. That’s all.


Lastly, a nice tip on how you can tell when a site’s static file was last updated (even if several factors can make this unpredictable).


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Blast Off with Blazor: Add a shared dialog component ]]> https://www.daveabrock.com/2021/03/17/blast-off-blazor-add-dialog/ 608c3e3df4327a003ba2fe97 Tue, 16 Mar 2021 19:00:00 -0500 So far in our series, we’ve walked through the intro, wrote our first component, dynamically updated the HTML head from a component, isolated our service dependencies, worked on hosting our images over Azure Blob Storage and Cosmos DB, built a responsive image gallery, and implemented prerendering, and built a search-as-you-type box.

With our card layout all set and searchable, we can now build a shared dialog component. In this dialog, users can click a card to get more details about the image. The dialog will display the title, date, an explanation, the photo itself, and a link where users can open the image from my public Azure Blob Storage container.

The new dialog

While building a dialog yourself seems straightforward, many nuances make it a pain in practice. You’ve got to apply CSS effects (like dimming the background when the dialog opens), make it scrollable, have it adapt to multiple screen sizes, allow keyboard input (like closing the dialog when you hit the Esc key), and so on. While there are many great options out there, I decided to use the MudBlazor component library.

Let’s get started. As always, my project is out on GitHub.

Install and configure MudBlazor

To install MudBlazor, I ran the following dotnet CLI command from my BlastOff.Client project:

dotnet add package MudBlazor

To prevent manual imports across the component files, I added a single @using at the bottom of my _Imports.razor file.

@using MudBlazor

Then, in Program.cs in the BlastOff.Client project, add the following services:

using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.Threading.Tasks;
using MudBlazor.Services;

namespace BlastOff.Client
{
    public class Program
    {
        public static async Task Main(string[] args)
        {
            var builder = WebAssemblyHostBuilder.CreateDefault(args);
            builder.RootComponents.Add<App>("#app");

            // other stuff not relevant to this post

            builder.Services.AddMudServices();
            builder.Services.AddMudBlazorDialog();

            await builder.Build().RunAsync();
        }
    }
}

In the project’s wwwroot/index.html, add MudBlazor’s CSS in the <head> tag and its JS before the end of the <body> tag. (By the way, this is all in the MudBlazor documentation.)

<!DOCTYPE html>
<html>
    <head>
        <!-- Condensed <head> -->
        <link href="BlastOff.Client.styles.css" rel="stylesheet">
        <link href="_content/MudBlazor/MudBlazor.min.css" rel="stylesheet" />
    </head>

    <body>
    <!-- Condensed <body> -->
    <script src="_framework/blazor.webassembly.js"></script>
    <script src="_content/MudBlazor/MudBlazor.min.js"></script>
    </body>
</html>

To complete MudBlazor setup, add a MudThemeProvider and MudDialogProvider component in the Shared/MainLayout.razor file. You’ll notice I can also pass global dialog parameters, like FullWidth and MaxWidth.

@inherits LayoutComponentBase
<NavBar />

<div>
    <div class="section columns">
        <main class="column">
            <MudThemeProvider/>
            <MudDialogProvider
                FullWidth="true"
                MaxWidth="MaxWidth.Small"/>
            @Body
        </main>
    </div>
</div>

Add the shared dialog component

With MudBlazor set up and configured, let’s add a new dialog component in Shared/ImageDialog.razor.

In the @code block, we first need to wire up Submit and Cancel events. The Submit will be bound to a simple Ok dialog result, and the Cancel will assume the default behavior. MudBlazor passes down a CascadingParameter of a MudDialog. I’m also passing down ImageDetails as a parameter containing the data I’m using to populate the dialog.

@code {
    [CascadingParameter] MudDialogInstance MudDialog { get; set; }
    [Parameter] public Image ImageDetails { get; set; }

    void Submit() => MudDialog.Close(DialogResult.Ok(true));
    void Cancel() => MudDialog.Cancel();
}

In the markup, the ImageDetails model lives inside a MudDialog, its DialogContent, and finally a MudContainer. After some simple Tailwind CSS styling, I use ImageDetails properties to populate my dialog.

<MudDialog>
    <DialogContent>
        <MudContainer Style="max-height: 500px; overflow-y: scroll">
            <div class="text-gray-600 text-md-left font-semibold tracking-wide">
                @ImageDetails.PrettyDate
            </div>
            <div class="text-xs p-4">
                @ImageDetails.Explanation
            </div>
            <div class="p-4">
                <a href="@ImageDetails.Url" target="_blank">See high-quality image <i class="fa fa-external-link" aria-hidden="true"></i></a>
            </div>
            <img src="@ImageDetails.Url" />
        </MudContainer>
    </DialogContent>
    <DialogActions>
        <MudButton OnClick="Cancel">Cancel</MudButton>
        <MudButton Color="Color.Primary" OnClick="Submit">Ok</MudButton>
    </DialogActions>
</MudDialog>

Add dialog to ImageCard

With the shared dialog created, now let’s add it to our existing ImageCard component. For the markup, we can inject an IDialogService and also bind an OpenDialog function to an @onclick event.

@inject IDialogService DialogService
<div class="image-container m-6 rounded overflow-hidden shadow-lg"
     @onclick="OpenDialog">

    <!-- All the existing markup -->
</div>

The @code block includes the ImageDetails as a parameter to send to the dialog. Then, in OpenDialog, pass the ImageDetails into DialogParameters, and show the dialog.

@code
{
    [Parameter]
    public Image ImageDetails { get; set; }

    private void OpenDialog()
    {
        var parameters = new DialogParameters {{"ImageDetails", ImageDetails}};
        DialogService.Show<ImageDialog>(ImageDetails.Title, parameters);
    }
}

Bonus: Use CSS isolation for image card hover effect

I added a hover effect, where image cards get just a little bit bigger as you hover over them. It’s a simple and subtle way to tell users, Hey, you can click on this.

Transform the image card with CSS

I’ve written a few times about Blazor CSS isolation. You can check out the links for details, but it allows you to scope styles to a specific component only.

To do this, I created an ImageCard.razor.css file. The style makes the image card 10% bigger on hover.

.image-container:hover {
    transform: scale(1.10);
}

Mission complete!

Wrap up

In this post, we used the MudBlazor component library to create a shared dialog component. We showed how to install and configure the library, create the component, and add it to our existing solution. As a bonus, we used CSS isolation to apply a nice hover effect.

]]>
<![CDATA[ Use C# to upload files to a GitHub repository ]]> https://www.daveabrock.com/2021/03/14/upload-files-to-github-repository/ 608c3e3df4327a003ba2fe96 Sat, 13 Mar 2021 18:00:00 -0600 If you need to upload a file to GitHub, you can do it easily from github.com. However, that doesn’t scale. When it comes to doing it on a repeated basis or facilitating automation, you’ll want to take a programmatic approach. While GitHub does have a REST API, C# developers can take advantage of the Octokit library. This library saves you a lot of time—you can take advantage of dynamic typing, get data models automatically, and not have to construct HttpClient calls yourself.

This post will show you how to use the Octokit library to upload a Markdown file to a GitHub repository.

Before we get started, download the Octokit library from NuGet. You can do this in several different ways, but the simplest is using the dotnet CLI. From the path of your project, execute the following from your favorite terminal:

dotnet add package Octokit

Create the client

To get started, you’ll need to create a client so that Octokit can connect to GitHub. To do that, you can instantiate a new GitHubClient. The GitHubClient takes a ProductHeaderValue, which can be any string—it’s so GitHub can identify your application. GitHub doesn’t allow API requests without a User-Agent header, and Octokit uses this value to populate one for you.

//using Octokit;

var gitHubClient = new GitHubClient(new ProductHeaderValue("MyCoolApp"));

With this single line of code, you can now access public GitHub information. For example, you can use a web browser to get a user’s details. Here’s what you get for api.github.com/users/daveabrock:

{
    "login": "daveabrock",
    "id": 275862,
    "node_id": "MDQ6VXNlcjI3NTg2Mg==",
    "avatar_url": "https://avatars.githubusercontent.com/u/275862?v=4",
    "gravatar_id": "",
    "url": "https://api.github.com/users/daveabrock",
    "html_url": "https://github.com/daveabrock",
    "followers_url": "https://api.github.com/users/daveabrock/followers",
    "following_url": "https://api.github.com/users/daveabrock/following{/other_user}",
    "gists_url": "https://api.github.com/users/daveabrock/gists{/gist_id}",
    "starred_url": "https://api.github.com/users/daveabrock/starred{/owner}{/repo}",
    "subscriptions_url": "https://api.github.com/users/daveabrock/subscriptions",
    "organizations_url": "https://api.github.com/users/daveabrock/orgs",
    "repos_url": "https://api.github.com/users/daveabrock/repos",
    "events_url": "https://api.github.com/users/daveabrock/events{/privacy}",
    "received_events_url": "https://api.github.com/users/daveabrock/received_events",
    "type": "User",
    "site_admin": false,
    "name": "Dave Brock",
    "company": null,
    "blog": "daveabrock.com",
    "location": "Madison, WI",
    "email": null,
    "hireable": null,
    "bio": "Software engineer, Microsoft MVP, speaker, blogger",
    "twitter_username": "daveabrock",
    "public_repos": 55,
    "public_gists": 2,
    "followers": 63,
    "following": 12,
    "created_at": "2010-05-13T20:05:05Z",
    "updated_at": "2021-03-13T19:05:32Z"
}

(Fun fact: the id values show how long a user has been with GitHub—I was the 275,862nd registered user, a few years after GitHub co-founder Tom Preston-Werner, who has an id of 1.)

To get this information programmatically, you can use the User object:

//using Octokit;

var gitHubClient = new GitHubClient(new ProductHeaderValue("MyCoolApp"));
var user = await gitHubClient.User.Get("daveabrock");
Console.WriteLine($"Woah! Dave has {user.PublicRepos} public repositories.");

That was quick and easy, but not very much fun. To modify anything in a repository you’ll need to authenticate.

Authenticate to the API

You can authenticate to the API in one of two ways: a basic username/password pair or an OAuth flow. Using OAuth (from a generated personal access token) is almost always a better approach, as you won’t have to store a password in the code, and you can revoke it as needed. What happens when a password changes? Bad, bad, bad.

Instead, connect by passing in a personal access token (PAT). From your profile details, navigate to Developer settings > Personal access tokens, and create one. You’ll want to include the repo permissions. After you create it, copy and paste the token somewhere. We’ll need it shortly.

Generate GitHub personal access token

With the generated PAT, here’s how you authenticate to the API. (Note: Be careful with your token. In real-world scenarios, make sure to store it somewhere safe and access it from your configuration.)

//using Octokit;

var gitHubClient = new GitHubClient(new ProductHeaderValue("MyCoolApp"));
gitHubClient.Credentials = new Credentials("my-new-personal-access-token");

Add a new file to the repository

With that in place, I’m using a StringBuilder to create a simple Markdown file. Here’s what I have so far:

//using Octokit;

var gitHubClient = new GitHubClient(new ProductHeaderValue("MyCoolApp"));
gitHubClient.Credentials = new Credentials("my-new-personal-access-token");

var sb = new StringBuilder("---");
sb.AppendLine();
sb.AppendLine($"date: \"2021-05-01\"");
sb.AppendLine($"title: \"My new fancy post\"");
sb.AppendLine("tags: [csharp, azure, dotnet]");
sb.AppendLine("---");
sb.AppendLine();

sb.AppendLine("# The heading for my first post");
sb.AppendLine();

Because CreateFile takes the file’s contents as a string, I’ll pass in sb.ToString(). Now I can call the CreateFile method to upload the file.

//using Octokit;

var gitHubClient = new GitHubClient(new ProductHeaderValue("MyCoolApp"));
gitHubClient.Credentials = new Credentials("my-new-personal-access-token");

var sb = new StringBuilder("---");
sb.AppendLine();
sb.AppendLine($"date: \"2021-05-01\"");
sb.AppendLine($"title: \"My new fancy updated post\"");
sb.AppendLine("tags: [csharp, azure, dotnet]");
sb.AppendLine("---");
sb.AppendLine();

sb.AppendLine("The heading for my first post");
sb.AppendLine();

var (owner, repoName, filePath, branch) = ("daveabrock", "daveabrock.github.io", 
        "_posts/2021-05-02-my-new-post.markdown", "main");

var result = await gitHubClient.Repository.Content.CreateFile(
     owner, repoName, filePath,
     new CreateFileRequest($"First commit for {filePath}", sb.ToString(), branch));

You can fire and forget in many cases, but it’s good to note that this method returns a Task<RepositoryChangeSet>, which gives you back commit and content details. I’ve captured it in result above, since we’ll need it in a moment.

Update an existing file

If you try to execute CreateFile on an existing file, you’ll get an error, because GitHub expects the SHA of the file you’re replacing. You’ll need to fetch that SHA first; the result of the earlier CreateFile call has it.

// GitHub wants the blob SHA of the file's content, not the commit SHA
var sha = result.Content.Sha;

await gitHubClient.Repository.Content.UpdateFile(owner, repoName, filePath,
    new UpdateFileRequest("My updated file", sb.ToString(), sha));

In many scenarios, you won’t be editing the file right after you create it. In these cases, get the file details first:

var fileDetails = await gitHubClient.Repository.Content.GetAllContentsByRef(owner, repoName,
    filePath, branch);

var updateResult = await gitHubClient.Repository.Content.UpdateFile(owner, repoName, filePath,
    new UpdateFileRequest("My updated file", sb.ToString(), fileDetails.First().Sha));

What about base64?

The CreateFile call also has a convertContentToBase64 boolean flag, if you’d prefer. For example, I can pass in an image’s base64 string and set convertContentToBase64 to true.

string imagePath = @"C:\pics\headshot.jpg";
string base64String = GetImageBase64String(imagePath);

var result = await gitHubClient.Repository.Content.CreateFile(
     owner, repoName, filePath,
     new CreateFileRequest($"First commit for {filePath}", base64String, branch, true));

static string GetImageBase64String(string imgPath)
{
    byte[] imageBytes = System.IO.File.ReadAllBytes(imgPath);
    return Convert.ToBase64String(imageBytes);
}

Wrap up

In this post, I showed you how to use the Octokit C# library to upload a new file to GitHub. We also discussed how to update existing files and pass base64 strings to GitHub. Thanks for reading, and have fun!

]]>
<![CDATA[ The .NET Stacks #40: 📚 Ignite is in the books ]]> https://www.daveabrock.com/2021/03/13/dotnet-stacks-40/ 608c3e3df4327a003ba2fe95 Fri, 12 Mar 2021 18:00:00 -0600 We’ve made it to 40. Does this mean we’re over the hill?

  • One big thing: Ignite recap
  • The little things: a new upgrade assistant, Okta buys Auth0, hash codes
  • Last week in the .NET world

One big thing: Ignite recap

Microsoft held Ignite this week. If you’re looking for splashy developer content, don’t hold your breath—that’s more suited for Build. Still, it’s an excellent opportunity to see what Microsoft is prioritizing. While most of this section won’t be pure .NET content, it’s still valuable (especially if you deploy to Azure). You can read a rundown of key announcements in the Book of News, and you can also hit up the YouTube playlist.

A big part of the VR-friendly keynote involved Satya Nadella sharing the virtual stage with Microsoft Mesh, the new mixed-reality platform. From the Book of News, it “powers collaborative experiences with a feeling of presence–meaning users feel like they are physically present with one another even when they are not.” Folks can interact with 3D objects and other people through Mesh-enabled apps across a wide variety of devices. If this is where you say, Dave, this is a .NET newsletter—noted. However, as a technology that received attention at the Ignite keynote, it bears mention. (Especially with its applications in today’s pandemic world.)

If you’re into Azure Cognitive Services, Microsoft rolled out semantic capabilities. Also, their Form Recognizer service now has support for ID documents and invoice extraction. There are new Enterprise tiers for Azure Cache for Redis, and Azure Cosmos DB now has continuous backup and point-in-time support.

Azure Communication Services, announced last fall, is hitting general availability in a few weeks. If you aren’t familiar with it, Azure Communication Services allows developers to integrate voice, video, and text communication with their apps. (It certainly makes the relationship with Twilio a little more interesting.) The power of having these capabilities in Azure gives it the ability to integrate with other Azure products and services. For example, Azure Bot Service has a new telephony channel built on Azure Communication Services. If you’re writing a bot, you can leverage the AI from Cognitive Services and integrate it with your phone and messaging capabilities. These capabilities provide a distinct advantage to other services that focus on one need.

The little things: a new upgrade assistant, Okta buys Auth0, hash codes

This week, Microsoft announced the .NET Upgrade Assistant, a global CLI tool that promises to help you upgrade your .NET Framework-based applications to .NET 5. It offers a guided experience for “incrementally updating your applications.” The assistant determines which projects you need to upgrade and recommends the order in which to upgrade them. Unlike tools like try-convert, you can see recommendations and choose how to upgrade your code.

The .NET Upgrade Assistant is not a tool meant to upgrade with a click of a button—you’ll likely need to do some manual work. It does promise to make your upgrade experience a lot easier. You can check out the GitHub repo for details as well. Side note: I’ll be releasing a detailed post on the .NET Upgrade Assistant by month’s end.


Last week, Okta bought Auth0 for $6.5 billion. (I think I need to start an API company.)

It makes sense: at the risk of oversimplifying, Okta provides IAM to companies that use it for SSO. Auth0 works on the app level, allowing developers API access to SSO. Okta has a reputation for being very enterprise-y, and Auth0 is known as a company with a startup feel that provides a developer-first experience. With this news, IdentityServer v4 now a paid product, and the Azure AD offerings, you have many choices when it comes to integrating auth in your .NET apps.


Did you know about HashCode.Combine? If you write custom GetHashCode implementations and C# 9 records don’t suit your needs, it makes overriding hash codes much easier.
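
Here’s a quick sketch of it in a custom GetHashCode override:

public readonly struct Point : IEquatable<Point>
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) => (X, Y) = (x, y);

    public bool Equals(Point other) => X == other.X && Y == other.Y;
    public override bool Equals(object obj) => obj is Point p && Equals(p);

    // Combines the fields into one well-distributed hash code, replacing
    // the usual hand-rolled prime-multiplication boilerplate.
    public override int GetHashCode() => HashCode.Combine(X, Y);
}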


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Ask About Azure: Why do resource groups need a location? ]]> https://www.daveabrock.com/2021/03/08/ask-azure-resource-group-locations/ 608c3e3df4327a003ba2fe94 Sun, 07 Mar 2021 18:00:00 -0600 Ask About Azure is a weekly-ish beginner-friendly series that answers common questions on Azure development and infrastructure topics.

In Azure, a resource group is a logical container that you use to hold related resources. When you organize related resources together, it makes it easier to perform common operations on them. For example, you might have a resource group to host resources for a singular app like the web front end, APIs, and so on. You should deploy, monitor, update, and delete resources in a resource group together. Think of it as a family: you’ll have your ups and downs, but life is a lot easier when you do it together. (Mostly.)

A resource group is just a logical container, and the resources inside it can belong to various locations. Why would you spread them out? Pretty straightforward: user requirements may dictate specific locations, or certain Azure resources may only be available in certain regions. The more interesting question is: if resource groups are logical and their resources can span locations, why does the resource group itself need a specific location?

If you try to create a resource group and leave out the location, you can’t create it.

If you create a resource group in the Azure Portal, you’ll see a required Region field:

Create resource group in the portal

If you go the scripting route, you’ll see it’s required in the Azure CLI:

az group create --name MyResourceGroup --location centralus

In PowerShell, it’s required as well:

New-AzResourceGroup -Name MyResourceGroup -Location centralus

So why?

The resource group itself stores metadata about the resources inside it. If you’re a .NET developer, think of something like a Visual Studio solution (.sln) file: it has information about the project in the solution, the Visual Studio version you’re using, and so on. Instead of projects in a solution file, think of resources in a resource group.

Anyway, this metadata resides in the location you specified when you created the resource group. Your location is important when it comes to compliance. Your company or government may have specific rules, regulations, or laws about where you must store certain data. This metadata benefits you, too: it allows you to better manage your resources for cost management and security and access controls. For example, you can assign allocation tags to your resource group or orchestrate deployments for ARM templates. In terms of security, it’s common for teams to have their access scoped to a specific resource group to prevent impacting other resource groups with their changes. Information like this can reside in your resource group’s metadata.

How does availability play into this? If my resource group is in Central US, but my Azure Function app is in East US 2, an East US 2 failure means the application won’t be available (unless you’ve made failure considerations, which you should). But what if Central US has an outage?

When this happens, your Function app in East US 2 will still be online, but you won’t be able to deploy new resources to the group until Central US comes back. Adding a resource (a new “project,” in our solution-file analogy) forces a change to the metadata, which you can’t write during an outage. Updates to existing resources typically shouldn’t impact you. In these situations, you can wait until the outage clears or, alternatively, create your resource in a new resource group and move it to the existing one afterward.

]]>
<![CDATA[ The .NET Stacks #39: 🔥 Is Dapr worth the hype? ]]> https://www.daveabrock.com/2021/03/06/dotnet-stacks-39/ 608c3e3df4327a003ba2fe93 Fri, 05 Mar 2021 18:00:00 -0600 Buckle up! It’s been a busy few weeks. With Ignite this week, things aren’t slowing down anytime soon. I’ve got you covered. Let’s get to it.

  • One big thing: Is Dapr worth the hype?
  • The little things: Blazor Desktop, .NET 5 on Azure Functions, ASP.NET Core health checks
  • Last week in the .NET world

One big thing: Is Dapr worth the hype?

I mentioned Dapr in passing last week, but its release competed with the release of .NET 6 Preview 1. I’ve been spending the last week trying to understand what exactly it is, and the Dapr for .NET Developers e-book has been mostly helpful. With Ignite kicking off on Tuesday, you’re going to start hearing a lot more about it. (If you don’t work with microservices, feel free to scroll to the next section.)

Dapr is a runtime for your microservice environment—in .NET terms, you could call it a “BCL for the cloud.” It provides abstractions for you to work with the complexities of distributed applications. Dapr calls these pluggable components. Think about all the buckets your distributed systems have: authentication/authorization, pub/sub messaging, state management, invoking services, observability, secrets management, and so on. Your services can call these pluggable components directly, and Dapr deals with calling all these dependencies on your behalf. For example, to call Redis you call the Dapr state management API. You can call Dapr from its native REST or gRPC APIs and also language-specific SDKs. I love the idea of calling pub/sub over HTTP and not haggling with specific message broker implementations.
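
To make that concrete, here’s a minimal sketch of calling the state building block over plain HTTP from C#. It assumes a local Dapr sidecar on its default port (3500) and a state store component named statestore, as in Dapr’s getting-started samples:

var client = new HttpClient();

// Save state: POST key/value pairs to the sidecar's state endpoint.
await client.PostAsJsonAsync(
    "http://localhost:3500/v1.0/state/statestore",
    new[] { new { key = "order-42", value = new { Status = "Shipped" } } });

// Read the value back by key.
var json = await client.GetStringAsync(
    "http://localhost:3500/v1.0/state/statestore/order-42");
Console.WriteLine(json);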

Dapr uses a sidecar architecture, enabling it to run in a separate memory process. Dapr says this provides isolation—Dapr can connect to a service but isn’t dependent on it. Each service can then have its own runtime environment. It can run in your existing environment, the edge, and Kubernetes (it’s built for containerized environments). While shepherded for Microsoft, it’s pretty agnostic and isn’t only built for Azure (but that might be lost in the Microsoft messaging). With Dapr, services communicate through encrypted channels, service calls are automatically retried when transient errors occur, and automatic service discovery reduces the amount of configuration needed for services to find each other.

While Dapr is service mesh-like, it is more concerned with handling distributed application features and is not dedicated network infrastructure. Yes, Dapr is a proxy. But if you’re in the cloud, breaking news: you’re using proxies whether you like it or not.

Dapr promises to handle the tricky parts for you through a consistent interface. Sure, you can do retries, proxies, network communication, and pub/sub on your own, but it’s probably a lot of duct tape and glue if you have a reasonably complex system.

With the release of Dapr v1.0, it’s production-ready. Will this latest “distributed systems made easy” offering solve all your problems? Of course not. Dapr makes its calls over the highly performant gRPC, but that’s still a lot of network calls. The line “To increase performance, developers can call the Dapr building blocks with gRPC” needs some unpacking. The team discusses low latency, but will REST be enough for all its chatter? Are you ready to hand your keys to yet another layer of abstraction? Would you be happy with a cloud within your cloud? Does your app have scale and complexity requirements that make this a necessity? Are you worried about leaky abstractions?

There’s a lot going on here, and I plan to explore more. If you wanted to learn more about Dapr, I hope this gets the ball rolling for you.


The little things: Blazor Desktop, .NET 5 on Azure Functions, health checks

With the release of .NET 6 Preview 1 last week, one of the most interesting takeaways was the mention of Blazor desktop apps. (It wasn’t part of Preview 1, but a preview of what’s to come for .NET 6.) As if we didn’t have enough desktop dev options to worry about—WPF, UWP, WinUI, .NET MAUI, WinForms, and so on—where does this even fit?

Here’s what Richard Lander wrote:

Blazor has become a very popular way to write .NET web apps. We first supported Blazor on the server, then in the browser with WebAssembly, and now we’re extending it again, to enable you to write Blazor desktop apps. Blazor desktop enables you to create hybrid client apps, which combine web and native UI together in a native client application. It is primarily targeted at web developers that want provide rich client and offline experiences for their users.

Initially, Blazor Desktop will not utilize WebAssembly. Built on top of the .NET MAUI platform coming with .NET 6, it’ll use that stack to use native containers and controls. You can choose to use Blazor for your entire desktop app or only for targeted functionality—in the blog post, Lander mentions a Blazor-driven user profile page integrated with an otherwise native WPF app.

It appears this will work similarly to Electron. There will be a WebView control responsible for rendering content from an embedded Blazor web server, which can serve both Blazor components and other static assets. If you’re hoping to see one unifying platform for native development, I wouldn’t hold your breath—but if you like working with Blazor (especially if you’re apprehensive of XAML), it’ll be worth a try.


In this week’s shameless plug, I wrote about using Azure Functions with .NET 5. A new out-of-process model is in preview. Here’s the story behind it:

Traditionally, .NET support on Azure Functions has been tied to the Azure Functions runtime. You couldn’t just expect to use the new .NET version in your Functions as soon as it was released. Because .NET 5 is not LTS, and Microsoft needs to support specific releases for extended periods, they can’t upgrade the host to a non-LTS version because it isn’t supported for very long (15 months from the November 2020 release). This doesn’t mean you can’t use .NET 5 with your Azure Functions. To do so, the team has rolled out a new out-of-process model that runs a worker process along the runtime. Because it runs in a separate process, you don’t have to worry about runtime and host dependencies. Looking long-term: it provides the ability to run the latest available version of .NET without waiting for a Functions upgrade.

As it’s in early preview, it does take some work to get going, but ultimately it’s great news for getting Azure Functions to support new .NET releases much sooner.


At work, we had a hackathon of sorts to build an authentication API to validate Azure runbooks and roll it out to production in two weeks (a mostly fun exercise). I hadn’t built an ASP.NET Core Web API from scratch in a while, and implemented health checks.

Of course, I knew you could add an endpoint in your middleware for a simple check:

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapHealthChecks("/api/healthcheck");
    });
}

There’s a ton of configuration options at your disposal, and after reading Kevin Griffin’s timely blog post, I learned about the AspNetCore.Diagnostics.HealthChecks library. Did you know about this? Where have I been? You can plug into health check packages for widely used services like Kubernetes, Redis, Postgres, and more. There’s even a UI you can leverage. In case you weren’t aware—maybe you were—I hope you find it helpful.
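
To give you an idea, here’s a minimal sketch of wiring up one of those packages. I’m assuming the AspNetCore.HealthChecks.Redis package here, and the connection string is a placeholder, so adjust for your environment:

public void ConfigureServices(IServiceCollection services)
{
    // AddRedis comes from the AspNetCore.HealthChecks.Redis package;
    // the connection string is a placeholder
    services.AddHealthChecks()
        .AddRedis("localhost:6379", name: "redis");
}

The /api/healthcheck endpoint from the earlier snippet then reports the Redis check alongside any others you register.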


🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #38: 📢 I hope you like announcements ]]> https://www.daveabrock.com/2021/02/27/dotnet-stacks-38/ 608c3e3df4327a003ba2fe91 Fri, 26 Feb 2021 18:00:00 -0600 It was very busy last week! Let’s get right to it.

  • One big thing: .NET 6 gets Preview 1
  • The little things: Dapr v1.0 is released, Azure Static Web Apps, more SolarWinds findings
  • Dev Discussions: Cecil Phillip
  • Last week in the .NET world

One big thing: .NET 6 gets Preview 1

It seems like just yesterday .NET 5 went live. (It was November, for the record.) This week, the .NET team announced .NET 6 Preview 1, to be officially released in November 2021. You can also check out the ASP.NET Core and EF Core 6 announcements. .NET 6 will be an enterprise-friendly LTS release, meaning it will be supported for three years.

As expected, a main focus will be integrating Xamarin into the “One .NET” model by way of the .NET Multi-Platform App UI (MAUI). With Preview 1, Xamarin developers can build Android and iOS apps with MAUI. Future previews will address support for macOS and Windows. Another big goal is improving the inner loop experience. You can check out themesof.net to see what’s being prioritized. AOT support is not rolled out yet, but you can head over to the NativeAOT repo to get up to speed.

For ASP.NET Core 6, they’re prioritizing work on hot reload, micro APIs, AoT compilation, updated SPA support, and HTTP/3. For Preview 1, the team now supports IAsyncDisposable in MVC, offers a new DynamicComponent for Blazor to render a component based on type, and applies more nullability annotations.

For EF Core 6, the team is busy with support for SQL Server sparse columns, validation that required properties aren’t null in the in-memory database, improved SQL Server translation for IsNullOrWhitespace, a Savepoints API, and much more.

It’s sure to be a busy eight months. Stay tuned.

The little things: Dapr v1.0 is released, Azure Static Web Apps, more SolarWinds findings

This week, Dapr hit GA with v1.0. It’s the next in a long line of technologies that promise to make distributed systems easier; we’ll see if this one sticks. While it’s hard for folks to pin down what exactly Dapr is—no, seriously—it uses pluggable components to remove the complexity of the low-level plumbing involved with developing distributed applications.

Here’s more detail from the new Dapr for .NET Developers e-book:

It provides a dynamic glue that binds your application with infrastructure capabilities from the Dapr runtime. For example, your application may require a state store. You could write custom code to target Redis Cache and inject it into your service at runtime. However, Dapr simplifies your experience by providing a distributed cache capability out-of-the-box. Your service invokes a Dapr building block that dynamically binds to a Redis Cache component via a Dapr configuration. With this model, your service delegates the call to Dapr, which calls Redis on your behalf. Your service has no SDK, library, or direct reference to Redis. You code against the common Dapr state management API, not the Redis Cache API.
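
To make that concrete, here’s a rough sketch of that delegation with the Dapr .NET SDK (the Dapr.Client package). The “statestore” component name is an assumption; use whatever your configuration defines:

using System.Threading.Tasks;
using Dapr.Client;

public class WeatherCache
{
    // DaprClient talks to the Dapr sidecar; the sidecar talks to Redis (or
    // whatever backing store the "statestore" component points at)
    private readonly DaprClient _daprClient = new DaprClientBuilder().Build();

    public Task SaveForecastAsync(string city, string forecast) =>
        _daprClient.SaveStateAsync("statestore", city, forecast);

    public Task<string> GetForecastAsync(string city) =>
        _daprClient.GetStateAsync<string>("statestore", city);
}

Note there’s no Redis reference anywhere in that code; swapping the backing store is a component configuration change, not a code change.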

In my talk last week on Azure Static Web Apps, it was nice to see Anthony Chu attend. He’s the PM for Azure Functions and Azure Static Web Apps. We asked what’s new with Azure Static Web Apps—he talked about a new tier, a CLI for a better local development experience, root domain support (right now it supports custom DNS with https://www.mysite.com but not the root https://mysite.com), an SLA, and more. There are no firm dates on these things, but it looks like improvements are on the way before Microsoft takes it out of preview.

We heard a little more about how the SolarWinds hack hit Microsoft:

Microsoft said its internal investigation had found the hackers studied parts of the source code instructions for its Azure cloud programs related to identity and security, its Exchange email programs, and Intune management for mobile devices and applications. Some of the code was downloaded, the company said, which would have allowed the hackers even more freedom to hunt for security vulnerabilities, create copies with new flaws, or examine the logic for ways to exploit customer installations … Microsoft had said before that the hackers had accessed some source code, but had not said which parts, or that any had been copied.

Microsoft blogged their “final update,” so that’s all we’ll hear about it. It looks like their defenses held up.

Dev Discussions: Cecil Phillip

I recently had a chance to catch up with Cecil Phillip, a Senior Cloud Advocate for Microsoft. He’s a busy guy: you might have seen him co-hosting The .NET Docs Show or the ON.NET Show. Cecil also does a lot with microservices and distributed systems. I wanted to pick his brain on topics like Dapr, YARP, and Service Fabric. I hope you enjoy it.

Cecil Phillip profile photo
I’d love to hear about how you got in this field and ended up at Microsoft.

I’ll try to give you a condensed version. Back in Antigua, we didn’t have computers in school at the time. One day, my dad brought home a Compaq Presario to upgrade from using a typewriter that he used for working on his reports. Initially, I wasn’t allowed to “experiment” with the machine, so I’d explore it when he wasn’t around. I was so curious about what this machine could do, I wanted to know what every button did—and eventually, I got better at it than he was.

I went to college and got my CS degrees. I didn’t know I wanted to be a programmer. I didn’t know many examples of what CS folks did at the time. I got my first job working on ASP.NET Web Forms applications. After I got my green card, I spent a few years exploring different industries like HR, Finance, and Education. I taught some university courses in the evenings, and it was at that point I realized how much I loved teaching kids. Then one day, I saw a tweet about a new team at Microsoft, and I thought, “Why not?” I didn’t think I’d get the job, but I’d give it a try.

When it comes to microservices, there’s been a lot of talk about promoting the idea of “loosely coupled monoliths” when the overhead of microservices might not be worth it. What are your thoughts?

Somewhere along the way, having your application get labeled as a monolith became a negative thing. In my opinion, we’ve learned a lot of interesting patterns that we can apply to both monoliths and microservices. For some solutions and teams, having a monolith is the right option. We can now pair that with the additional benefits of modern patterns to get some additional resiliency for a stable application.

I’ve been reading and learning about YARP, a .NET reverse proxy. Why would I use this over something like nginx?

If you’re a .NET developer and want the ability to write custom rules or extensions for your reverse proxy/load balancer using your .NET expertise, then YARP is a great tool.

Azure Front Door seems reverse-proxyish—why should I choose YARP over this?

Azure Front Door is a great option if you’re running large-scale applications across multiple regions. It’s a hosted service with load balancing features, so you get things like SSL management, layer-7 routing, health checks, URL rewriting, and so on. YARP, on the other hand, is middleware for ASP.NET Core that implements some of those same reverse proxy concerns, but you’re responsible for your infrastructure and have to write code to enable your features. YARP would be more akin to something like HAProxy or nginx.

A lot of enterprises, like mine, use nginx for an AKS ingress controller. Will YARP ever help with something like that?

Maybe. 🙂

I know you’ve also been working with Dapr. I know it simplifies event management for microservices, but can you illustrate how it can help .NET developers and what pain points it solves?

The biggest selling point for Dapr is that it provides an abstraction over common needs for microservice developers. In the ecosystem today, there are tons of options to choose from to get a particular thing done— but with that comes tool-specific SDKs, APIs, and configuration. Dapr allows you to choose the combination of tools you want without needing to have your code rely on tool-specific SDKs. That includes things like messaging, secrets, service discovery, observability, actors, and so on.

How does Dapr compare with something like Service Fabric? Is there overlap, and do you see Dapr someday replacing it?

I look at Dapr and Service Fabric as two completely different things. Dapr is an open-source, cross-platform CLI tool that provides building blocks for making microservice development easier. Service Fabric is a hosted Azure service that helps with packaging and deploying distributed applications. Dapr can run practically anywhere—on any cloud, on-premises, and Kubernetes. You can even run it on Service Fabric clusters since Service Fabric can run processes. It can run containers and even contains a programming model that applications can use as a hard dependency. Dapr itself is a process that would run beside your application’s various instances to provide some communication abstractions.

What is your one piece of programming advice?

I think it’s important to spend time reading code. We should actually read more code than we write. There’s so much we can learn and be inspired by, exploring the techniques used by the developers that have come before us.

You can connect with Cecil Phillip on Twitter.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use Azure Functions with .NET 5 ]]> https://www.daveabrock.com/2021/02/24/functions-dotnet-5/ 608c3e3df4327a003ba2fe90 Tue, 23 Feb 2021 18:00:00 -0600 A little while ago, I wrote a post showing off how to use Open API and HttpRepl with ASP.NET Core 5 Web APIs. Using a new out-of-process model, you can now use .NET 5 with Azure Functions.

Traditionally, .NET support on Azure Functions has been tied to the Azure Functions runtime. You couldn’t just expect to use the new .NET version in your Functions as soon as it was released. As Anthony Chu writes, an updated version of the host is also required. Because .NET 5 is not LTS, and Microsoft needs to support specific releases for extended periods, they can’t upgrade the host to a non-LTS version because it isn’t supported for very long (15 months from the November 2020 release).

This doesn’t mean you can’t use .NET 5 with your Azure Functions. To do so, the team has rolled out a new out-of-process model that runs a worker process alongside the runtime. Because it runs in a separate process, you don’t have to worry about runtime and host dependencies. Looking long-term: it provides the ability to run the latest available version of .NET without waiting for a Functions upgrade.

There’s now an early preview of the .NET 5 worker, with plans to be generally available in “early 2021.” I used the repository from my previous post to see how it works.

Note: Before embarking on this adventure, you’ll need the .NET 5 SDK installed as well as Azure Functions Core Tools (a version >= 3.0.3160).

Update local.settings.json

Update local.settings.json and change the FUNCTIONS_WORKER_RUNTIME to dotnet-isolated. This is likely a short-term fix, as the worker will be targeted to future .NET versions.

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "AzureWebJobsStorage": ""
  }
}

Update project file

You’ll need to update your project file to the following:

<PropertyGroup>
  <TargetFramework>net5.0</TargetFramework>
  <LangVersion>preview</LangVersion>
  <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  <OutputType>Exe</OutputType>
  <_FunctionsSkipCleanOutput>true</_FunctionsSkipCleanOutput>
</PropertyGroup>

You might be wondering why the OutputType is exe. A .NET 5 Functions app is actually a .NET console app executable running in a separate process. Save that for Trivia Night. You’ll also note it targets the v3 host and sets a _FunctionsSkipCleanOutput flag to preserve important files.

You’ll also need to update or install quite a few NuGet packages. It’ll probably be quickest to copy and paste these into your project file, save it, and let the NuGet restore process take over.

<ItemGroup>
    <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.0.0-preview4" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.0.0-preview4" OutputItemType="Analyzer" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Http" Version="3.0.12" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="4.0.3" />
    <PackageReference Include="Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator" Version="1.2.0" />
    <PackageReference Include="System.Net.NameResolution" Version="4.3.0" />
</ItemGroup>

Lastly, you’ll need to include the following so the worker behaves correctly on Linux.

<Target Name="CopyRuntimes" AfterTargets="AfterBuild" Condition=" '$(OS)' == 'UNIX' ">
    <!-- To workaround a bug where the files aren't copied correctly for non-Windows platforms -->
    <Exec Command="rm -rf $(OutDir)bin/runtimes/* &amp;&amp; mkdir -p $(OutDir)bin/runtimes &amp;&amp; cp -R $(OutDir)runtimes/* $(OutDir)bin/runtimes/" />
</Target>

Add code to use EF in-memory database

In my previous post, I used the Entity Framework in-memory database to quickly get going. I can add it just as easily in my Azure Function.

In my Data folder, I have an ApiModels.cs file to store my models as C# 9 records, a SampleContext, and a SeedData class.

The ApiModels.cs:

using System.ComponentModel.DataAnnotations;

namespace FunctionApp.Data
{
    public class ApiModels
    {
        public record Band(int Id, [Required] string Name);
    }
}

My SampleContext.cs:

using Microsoft.EntityFrameworkCore;

namespace FunctionApp.Data
{
    public class SampleContext : DbContext
    {
        public SampleContext(DbContextOptions<SampleContext> options)
            : base(options)
        {
        }

        public DbSet<ApiModels.Band> Bands { get; set; }
    }
}

And here’s the SeedData class:

using System.Linq;

namespace FunctionApp.Data
{
    public class SeedData
    {
        public static void Initialize(SampleContext context)
        {
            if (!context.Bands.Any())
            {
                context.Bands.AddRange(
                    new ApiModels.Band(1, "Led Zeppelin"),
                    new ApiModels.Band(2, "Arcade Fire"),
                    new ApiModels.Band(3, "The Who"),
                    new ApiModels.Band(4, "The Eagles, man")
                );

                context.SaveChanges();
            }
        }
    }
}

Enjoy a better middleware experience

With the new worker, I can enjoy a better dependency injection experience—I can finally do away with all that [assembly: FunctionsStartup(typeof(Startup))] business. It feels more natural, like I’m working in ASP.NET Core.

In Program.cs, I added a function to seed my database at startup time:

private static void SeedDatabase(IHost host)
{
    var scopeFactory = host.Services.GetRequiredService<IServiceScopeFactory>();

    using var scope = scopeFactory.CreateScope();
    var context = scope.ServiceProvider.GetRequiredService<SampleContext>();

    if (context.Database.EnsureCreated())
    {
        try
        {
            SeedData.Initialize(context);
        }
        catch (Exception ex)
        {
            var logger = scope.ServiceProvider.GetRequiredService<ILogger<Program>>();
            logger.LogError(ex, "A database seeding error occurred.");
        }
    }
}

In the Main method, I can inject services and work with my middleware. (More on that Debugger.Launch() later.)

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using System.Diagnostics;
using FunctionApp.Data;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Azure.Functions.Worker.Configuration;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

static async Task Main(string[] args)
{
#if DEBUG
    Debugger.Launch();
#endif
    var host = new HostBuilder()
        .ConfigureAppConfiguration(c =>
        {
            c.AddCommandLine(args);
        })
        .ConfigureFunctionsWorker((c, b) =>
        {
            b.UseFunctionExecutionMiddleware();
        })
        .ConfigureServices(s =>
        {
            s.AddSingleton<IHttpResponderService, DefaultHttpResponderService>();
            s.AddDbContext<SampleContext>(options =>
                options.UseInMemoryDatabase("SampleData"));
        })
        .Build();
    SeedDatabase(host);

    await host.RunAsync();
}

Write the function

With all that out of the way, I can write my Function. I started with a GET endpoint, but all the other endpoints from the previous post should follow the same pattern.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using FunctionApp.Data;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace FunctionApp
{
    public class BandsFunction
    {
        private readonly SampleContext _context;

        public BandsFunction(SampleContext context)
        {
            _context = context;
        }

        [FunctionName("BandsGet")]
        public Task<List<ApiModels.Band>> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
            HttpRequestData req) =>
                Task.FromResult(_context.Bands.ToList());
    }
}

This is a simple example, but as you dig a little more you’ll notice that in .NET 5 you’ll replace HttpRequest with HttpRequestData and ILogger with FunctionExecutionContext.

About the tooling

As with early preview capabilities, the tooling isn’t that far along. For example, right now you can’t expect to hit F5 in Visual Studio to debug your Function. To run the sample locally, you’ll need to use Azure Functions Core Tools from the command line.

From the prompt, cd into the project directory and execute the following:

func host start --verbose

To debug in Visual Studio, make sure the Debugger.Launch() call in Program.cs is active (above, it’s wrapped in an #if DEBUG block), then run the command; Visual Studio will prompt you to attach.

For VS Code, you’ll need to install the Functions extension, select the Attach to .NET Functions launch task, then start debugging.

Wrap up

In this post, I introduced the new Functions worker, which allows you to use .NET 5 with your Azure Functions—and will allow more immediate release support in the future. We updated our project files, learned about the easier DI experience, and walked through the tooling experience. It takes some manual work to get this going, but the tooling should improve over time.

Next, you can look into deploying your app to Azure. Right now, the team is warning you can only do this for Windows plans and it isn’t completely optimized yet.

Take a look at the GitHub repo for more details and for a sample app.

]]>
<![CDATA[ Dev Discussions: Cecil Phillip ]]> https://www.daveabrock.com/2021/02/21/dev-discussions-cecil-phillip/ 608c3e3df4327a003ba2fe8f Sat, 20 Feb 2021 18:00:00 -0600 This is the full interview from my discussion with Cecil Phillip in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

I recently had a chance to catch up with Cecil Phillip, a Senior Cloud Advocate for Microsoft. He’s a busy guy: you might have seen him co-hosting The .NET Docs Show or the ON.NET Show. Cecil also does a lot with microservices and distributed systems. I wanted to pick his brain on topics like Dapr, YARP, and Service Fabric. I hope you enjoy it.

I’d love to hear about how you got in this field and ended up at Microsoft.

I’ll try to give you a condensed version. Back in Antigua, we didn’t have computers in school at the time. One day, my dad brought home a Compaq Presario to upgrade from using a typewriter that he used for working on his reports. Initially, I wasn’t allowed to “experiment” with the machine, so I’d explore it when he wasn’t around. I was so curious about what this machine could do, I wanted to know what every button did—and eventually, I got better at it than he was.

I went to college and got my CS degrees. I didn’t know I wanted to be a programmer. I didn’t know many examples of what CS folks did at the time. I got my first job working on ASP.NET Web Forms applications. After I got my green card, I spent a few years exploring different industries like HR, Finance, and Education. I taught some university courses in the evenings, and it was at that point I realized how much I loved teaching kids. Then one day, I saw a tweet about a new team at Microsoft, and I thought, “Why not?” I didn’t think I’d get the job, but I’d give it a try.

When it comes to microservices, there’s been a lot of talk about promoting the idea of “loosely coupled monoliths” when the overhead of microservices might not be worth it. What are your thoughts?

Somewhere along the way, having your application get labeled as a monolith became a negative thing. In my opinion, we’ve learned a lot of interesting patterns that we can apply to both monoliths and microservices. For some solutions and teams, having a monolith is the right option. We can now pair that with the additional benefits of modern patterns to get some additional resiliency for a stable application.

I’ve been reading and learning about YARP, a .NET reverse proxy. Why would I use this over something like nginx?

If you’re a .NET developer and want the ability to write custom rules or extensions for your reverse proxy/load balancer using your .NET expertise, then YARP is a great tool.

Azure Front Door seems reverse-proxyish—why should I choose YARP over this?

Azure Front Door is a great option if you’re running large-scale applications across multiple regions. It’s a hosted service with load balancing features, so you get things like SSL management, layer-7 routing, health checks, URL rewriting, and so on. YARP, on the other hand, is middleware for ASP.NET Core that implements some of those same reverse proxy concerns, but you’re responsible for your infrastructure and have to write code to enable your features. YARP would be more akin to something like HAProxy or nginx.

A lot of enterprises, like mine, use nginx for an AKS ingress controller. Will YARP ever help with something like that?

Maybe. 🙂

I know you’ve also been working with Dapr. I know it simplifies event management for microservices, but can you illustrate how it can help .NET developers and what pain points it solves?

The biggest selling point for Dapr is that it provides an abstraction over common needs for microservice developers. In the ecosystem today, there are tons of options to choose from to get a particular thing done— but with that comes tool-specific SDKs, APIs, and configuration. Dapr allows you to choose the combination of tools you want without needing to have your code rely on tool-specific SDKs. That includes things like messaging, secrets, service discovery, observability, actors, and so on.

How does Dapr compare with something like Service Fabric? Is there overlap, and do you see Dapr someday replacing it?

I look at Dapr and Service Fabric as two completely different things. Dapr is an open-source, cross-platform CLI tool that provides building blocks for making microservice development easier. Service Fabric is a hosted Azure service that helps with packaging and deploying distributed applications. Dapr can run practically anywhere—on any cloud, on-premises, and Kubernetes. You can even run it on Service Fabric clusters since Service Fabric can run processes. It can run containers and even contains a programming model that applications can use as a hard dependency. Dapr itself is a process that would run beside your application’s various instances to provide some communication abstractions.

What is your one piece of programming advice?

I think it’s important to spend time reading code. We should actually read more code than we write. There’s so much we can learn and be inspired by, exploring the techniques used by the developers that have come before us.

You can connect with Cecil Phillip on Twitter.

]]>
<![CDATA[ The .NET Stacks #37: 😲 When your private NuGet feed isn't so private ]]> https://www.daveabrock.com/2021/02/20/dotnet-stacks-37/ 608c3e3df4327a003ba2fe8e Fri, 19 Feb 2021 18:00:00 -0600 Good morning to you. Is it summer yet? This week, I’m trying a new format: an in-depth topic, a lot of little things I’ve found, and, of course, the links. I appreciate any feedback you have.

  • One big thing: A supply chain attack that should scare any company
  • The little things: Container updates, typed exceptions, Blazor REPL
  • Last week in the .NET world

One big thing: A supply chain attack that should scare any company

When installing packages—whether over NPM, PyPI, or, for us, NuGet—you’re happy not to have to deal with the complexities of dependency management yourself. When you do this, you’re trusting publishers whose packages run code on your machine. When you install a package, do you ever wonder if this is open to exploits? It certainly is.

This week, Alex Birsan wrote about what he’s been working on for the last 6-7 months: hacking into dozens of companies—using a supply chain attack (or, as he calls it, dependency confusion). The ethical hack stems from a finding at PayPal last year, where Justin Gardner found that PayPal included private dependencies in a package.json hosted on GitHub. Those dependencies didn’t exist in the public NPM registry at the time, raising the question: what happens if you upload malicious code to NPM under those package names? Will the code get installed on PayPal-owned servers?

It isn’t just PayPal:

Apparently, it is quite common for internal package.json files, which contain the names of a javascript project’s dependencies, to become embedded into public script files during their build process, exposing internal package names. Similarly, leaked internal paths or require() calls within these files may also contain dependency names. Apple, Yelp, and Tesla are just a few examples of companies who had internal names exposed in this way.

It worked like a charm. The .NET Stacks is a family newsletter, so I’ll channel The Good Place: holy forking shirtballs.

From one-off mistakes made by developers on their own machines, to misconfigured internal or cloud-based build servers, to systemically vulnerable development pipelines, one thing was clear: squatting valid internal package names was a nearly sure-fire method to get into the networks of some of the biggest tech companies out there, gaining remote code execution, and possibly allowing attackers to add backdoors during builds.

I know what you’re thinking: JavaScript is insecure. Is this where I pretend to be shocked? The hack had impacts on .NET Core as well:

Although this behavior was already commonly known, simply searching GitHub for --extra-index-url was enough to find a few vulnerable scripts belonging to large organizations — including a bug affecting a component of Microsoft’s .NET Core.

From the .NET side, Barry Dorrans noted:

Barry explains

So Microsoft fixed their problem. What about us? Microsoft has developed a whitepaper for using private feeds. Their three suggestions for NuGet feeds:

  • Reference a single private feed, not multiple, by using a single <add/> entry for a private feed in nuget.config, and a <clear /> entry to remove inherited configuration
  • Use controlled scopes using an ID prefix to restrict uploads to the public gallery
  • Use a packages.lock.json file to validate packages have not changed using version pinning and integrity checking.

If you manage private NuGet feeds, make sure to heed the advice.
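
For the first suggestion, a nuget.config that follows the advice might look something like this. The feed name and URL are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Remove any inherited sources so only the feed below is used -->
    <clear />
    <!-- A single private feed; swap in your organization's URL -->
    <add key="MyPrivateFeed" value="https://pkgs.example.com/myorg/nuget/v3/index.json" />
  </packageSources>
</configuration>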


The little things: Container updates, typed exceptions, Blazor REPL

If you’re like me, pulling containers from the Microsoft container registry is a simple process. I pull the images and move on. This week, Rich Lander outlined the complexities the team encounters. How hard can managing Dockerfiles be? When it comes to supporting a millionish pulls a month, it can be. A lot of the issues come with tracking potential vulnerabilities and how to manage them. He also offers some tips we can use: like rebuilding images frequently, reading CVE reports, and so on. It’s a long one but worth a read if you work with .NET container images.

On a related note, this week Mark Heath wrote an excellent post around Docker tooling for Visual Studio 2019.

In this week’s Entity Framework community standup, the team talked with Giorgi Dalakishvili about his EntityFramework.Exceptions project, which brings typed exceptions to Entity Framework Core. The project gets past digging into DbUpdateException to find your exact issue—whether it’s constraints, value types, or missing required values.

With EntityFramework.Exceptions, you can configure the DbContext to throw different exceptions, like UniqueConstraintException or CannotInsertNullException. It’s another great project where we wonder why it isn’t baked in, but happy it’s here.
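
Here’s a rough sketch of how it’s used, assuming the SQL Server flavor of the package and a hypothetical Blog entity:

using System.Threading.Tasks;
using EntityFramework.Exceptions.Common;
using EntityFramework.Exceptions.SqlServer;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Url { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Replaces the generic DbUpdateException with typed exceptions
        // (database provider configuration omitted)
        optionsBuilder.UseExceptionProcessor();
    }
}

public static class BlogSaver
{
    public static async Task SaveAsync(BloggingContext context, Blog blog)
    {
        context.Blogs.Add(blog);
        try
        {
            await context.SaveChangesAsync();
        }
        catch (UniqueConstraintException)
        {
            // A duplicate key, caught directly instead of digging
            // through DbUpdateException
        }
    }
}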

Have you heard of the Blazor REPL project?

Blazor REPL is a platform for writing, compiling, executing and sharing Blazor components entirely in the browser. It’s perfect for code playground and testing. It’s fast and secure. The platform is built and is running entirely on top of Blazor WASM - the WebAssembly hosting model of Blazor.

You can write and run components right in the client. It’s perfect for you to test component libraries before you bring them into your project—the MudBlazor library has a site for this at try.mudblazor.com.

This week, the Blazor REPL project teased what’s coming—the ability to download NuGet packages and save public snippets.

ASP.NET Core architect David Fowler showed off some underrated .NET APIs for working with strings. I learned quite a bit, as I typically blindly use the same string methods all the time.

I recently learned about github1s.com, which allows you to browse code on GitHub with VS Code embedded in a browser for you. While you’re browsing a GitHub repo, change github.com to github1s.com to see it in action. Very nice.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Build a Blazor 'Copy to Clipboard' component with a Markdown editor ]]> https://www.daveabrock.com/2021/02/18/copy-to-clipboard-markdown-editor/ 608c3e3df4327a003ba2fe8d Wed, 17 Feb 2021 18:00:00 -0600 I recently built a quick utility app for the day job, where I used a simple Markdown previewer with a Copy to Clipboard button. I use the button to notify if the copy is successful. Then, I return the button to its original state.

Here’s how the app looks when it works correctly.

A successful copy

And here’s how it looks when the copy fails.

A failed copy

We’ll build a component that allows users to copy and paste text from a markdown previewer. This process involves three steps:

  • Implement a ClipboardService
  • Create a shared CopyToClipboardButton component
  • Use the component with a markdown previewer

The code is out on GitHub if you’d like to play along.

Implement a ClipboardService

To write text to the clipboard, we’ll need to use a browser API. This work involves some quick JavaScript, whether from a pre-built component or some JavaScript interoperability. Luckily for us, we can create a basic ClipboardService that uses IJSRuntime to call the Clipboard API, which is widely supported in today’s browsers.

We’ll create a WriteTextAsync method that takes in the text to copy. Then, we’ll write the text to the API with a navigator.clipboard.writeText call. Here’s the code for Services/ClipboardService.cs:

using System.Threading.Tasks;
using Microsoft.JSInterop;

namespace ClipboardSample.Services
{
    public class ClipboardService
    {
        private readonly IJSRuntime _jsRuntime;

        public ClipboardService(IJSRuntime jsRuntime)
        {
            _jsRuntime = jsRuntime;
        }

        public ValueTask WriteTextAsync(string text)
        {
            return _jsRuntime.InvokeVoidAsync("navigator.clipboard.writeText", text);
        }
    }
}

Then, in Program.cs, reference the new service we created:

builder.Services.AddScoped<Services.ClipboardService>();

With that out of the way, let’s create the CopyToClipboardButton component.

Create a shared CopyToClipboardButton component

To create a shared component, create a CopyToClipboardButton component in your project’s shared directory.

Thanks to Gérald Barré for his excellent input on working with button state. Check out his site for great ASP.NET Core and Blazor content.

At the top of the file, let’s inject our ClipboardService. (We won’t need a @page directive since this will be a shared component and not a routable page.)

@inject ClipboardService ClipboardService

Now, we’ll need to understand how the button will look. For both the active and notification states, we need to have the following:

  • Message to display
  • Font Awesome icon to display
  • Bootstrap button class

With that in mind, let’s define all those at the beginning of the component’s @code block. (This could be in a separate file, too, if you wish.)

@code {
    private const string _successButtonClass = "btn btn-success";
    private const string _infoButtonClass = "btn btn-info";
    private const string _errorButtonClass = "btn btn-danger";

    private const string _copyToClipboardText = "Copy to clipboard";
    private const string _copiedToClipboardText = "Copied to clipboard!";
    private const string _errorText = "Oops. Try again.";

    private const string _fontAwesomeCopyClass = "fa fa-clipboard";
    private const string _fontAwesomeCopiedClass = "fa fa-check";
    private const string _fontAwesomeErrorClass = "fa fa-exclamation-circle";

With that, we need to include a Text property as a component parameter. The caller will provide this to us, so we know what to copy.

[Parameter] 
public string Text { get; set; }

Through the joy of C# 9 records and target typing, we can create an immutable object to work with the initial state.

record ButtonData(bool IsDisabled, string ButtonText,
        string ButtonClass, string FontAwesomeClass);

ButtonData buttonData = new(false, _copyToClipboardText,
                      _infoButtonClass, _fontAwesomeCopyClass);

Now, in the markup, we can add a new button with the properties we defined.

<button class="@buttonData.ButtonClass" disabled="@buttonData.IsDisabled" 
        @onclick="CopyToClipboard">
    <i class="@buttonData.FontAwesomeClass"></i> @buttonData.ButtonText
</button> 

You’ll get an error because your editor doesn’t know about the CopyToClipboard method. Let’s create it.

First, set up an originalData variable that holds the original state, so we have it when it changes.

var originalData = buttonData;

Now, we’ll do the following in a try/catch block:

  • Write the text to the clipboard
  • Update buttonData to show it was a success/failure
  • Call StateHasChanged
  • Wait 1500 milliseconds
  • Return buttonData to its original state

We need to explicitly call StateHasChanged to notify the component it needs to re-render because the state … has changed.

Here’s the full CopyToClipboard method (along with a TriggerButtonState private method for reusability).

public async Task CopyToClipboard()
{
    var originalData = buttonData;
    try
    {
        await ClipboardService.WriteTextAsync(Text);

        buttonData = new ButtonData(true, _copiedToClipboardText,
                                    _successButtonClass, _fontAwesomeCopiedClass);
        await TriggerButtonState();
        buttonData = originalData;
    }
    catch
    {
        buttonData = new ButtonData(true, _errorText,
                                    _errorButtonClass, _fontAwesomeErrorClass);
        await TriggerButtonState();
        buttonData = originalData;
    }
}

private async Task TriggerButtonState()
{
    StateHasChanged();
    await Task.Delay(TimeSpan.FromMilliseconds(1500));
}

For reference, here’s the entire CopyToClipboardButton component:

@inject ClipboardService ClipboardService

<button class="@buttonData.ButtonClass" disabled="@buttonData.IsDisabled" 
        @onclick="CopyToClipboard">
    <i class="@buttonData.FontAwesomeClass"></i> @buttonData.ButtonText
</button> 
<br/><br />

@code {
    private const string _successButtonClass = "btn btn-success";
    private const string _infoButtonClass = "btn btn-info";
    private const string _errorButtonClass = "btn btn-danger";

    private const string _copyToClipboardText = "Copy to clipboard";
    private const string _copiedToClipboardText = "Copied to clipboard!";
    private const string _errorText = "Oops. Try again.";

    private const string _fontAwesomeCopyClass = "fa fa-clipboard";
    private const string _fontAwesomeCopiedClass = "fa fa-check";
    private const string _fontAwesomeErrorClass = "fa fa-exclamation-circle";

    [Parameter] 
    public string Text { get; set; }

    record ButtonData(bool IsDisabled, string ButtonText,
        string ButtonClass, string FontAwesomeClass);

    ButtonData buttonData = new(false, _copyToClipboardText,
                      _infoButtonClass, _fontAwesomeCopyClass);

    public async Task CopyToClipboard()
    {
        var originalData = buttonData;
        try
        {
            await ClipboardService.WriteTextAsync(Text);

            buttonData = new ButtonData(true, _copiedToClipboardText,
                                    _successButtonClass, _fontAwesomeCopiedClass);
            await TriggerButtonState();
            buttonData = originalData;
        }
        catch
        {
            buttonData = new ButtonData(true, _errorText,
                                    _errorButtonClass, _fontAwesomeErrorClass);
            await TriggerButtonState();
            buttonData = originalData;
        }
    }

    private async Task TriggerButtonState()
    {
        StateHasChanged();
        await Task.Delay(TimeSpan.FromMilliseconds(1500));
    }
}

Great! You should now be able to see the button in action. If you need help triggering a failed state, you can go back to ClipboardService and spell the JS function wrong:

return _jsRuntime.InvokeVoidAsync("navigatoooooor.clipboard.writeText", text);

That’s great, but our component doesn’t do anything meaningful. To do that, we can attach it to a Markdown previewer.

Use the component with a markdown previewer

We can now build a simple Markdown previewer with the Markdig library. This allows us to convert Markdown to HTML and preview the rendered result.

Thanks to Jon Hilton’s excellent post, I was able to do this in minutes. I’ll quickly add the component here—if you want greater explanation or context, please visit Jon’s post (another great site for Blazor content).

After you download the Markdig package and add a @using Markdig line to your _Imports.razor, create an Editor.razor component with something like this:

@page "/clipboard"

<div class="row">
    <div class="col-6" height="100">
        <textarea class="form-control" 
        @bind-value="Body" 
        @bind-value:event="oninput"></textarea>
    </div>
    <div class="col-6">
        @((MarkupString) Preview)
    </div>
</div>

@code {
    public string Body { get; set; } = string.Empty;
    public string Preview => Markdown.ToHtml(Body);
}

Long story short, I’m adding a text area for writing Markdown, bound to the Body text. Then, as the user types (via the @bind-value:event="oninput" directive), Markdig renders HTML in the pane on the right.

All we need to do now is include our new component, and pass the Body along. Add the following just below the @page directive and before the div:

<CopyToClipboardButton 
    Text="@Body" />

As a reference, here’s the full Editor.razor file:

@page "/editor"

<CopyToClipboardButton 
    Text="@Body" />

<div class="row">
    <div class="col-6" height="100">
        <textarea class="form-control" 
        @bind-value="Body" 
        @bind-value:event="oninput"></textarea>
    </div>
    <div class="col-6">
        @((MarkupString) Preview)
    </div>
</div>

@code {
    public string Body { get; set; } = string.Empty;
    public string Preview => Markdown.ToHtml(Body);
}

That’s really all there is to it!

Wrap up

In this post, we built a reusable component to copy text to the clipboard. As a bonus, the component toggles between active and notification states. We also saw how simple it was to attach it to a Markdown previewer.

There’s more we could do: for example, can we make the component more reusable by passing in button text and states—making it a ButtonToggleState component and not just used for copying to a clipboard? Give it a shot and let me know how it goes!

]]>
<![CDATA[ The .NET Stacks #36: ⚡ Azure Functions and some Microsoft history ]]> https://www.daveabrock.com/2021/02/13/dotnet-stacks-36/ 608c3e3df4327a003ba2fe8c Fri, 12 Feb 2021 18:00:00 -0600 Welcome to another week. If you’re seeing temperatures above 0° F, please send some my way. Talk about a cold start—Azure Functions should be jealous.

This week:

  • OpenAPI support for Azure Functions
  • Achieving inheritance with Blazor CSS isolation
  • Ignoring calls from Bill Gates

Open API support for Azure Functions

When you create a new ASP.NET Core Web API in .NET 5, you may have noticed that Open API (more commonly known to you as Swagger UI) is supported out of the box—and there’s a handy check mark in Visual Studio to guide you along. You might be wondering: will Open API support come to Azure Functions?

The answer is yes. While it’s very much in preview, the team’s been talking about releasing Open API bindings for Functions. This idea isn’t anything new, but the bindings look promising.

With bindings, you’ll be able to decorate your functions with bindings like:

  • [OpenApiOperation("addStuff", "stuff")]
  • [OpenApiRequestBody("application/json", typeof(MyRequestModel))]
  • [OpenApiResponseBody(HttpStatusCode.OK, "application/json", typeof(MyResponseModel))]

I’m seeing JSON.NET decorators (hat tip to the team for not pushing System.Text.Json on us), optional metadata configuration, and security and authentication support. It’s early, so if you do try it out be patient and report issues as you come across them.
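
Putting the bindings from that list onto an HTTP-triggered function might look roughly like this. It’s a sketch: the attribute signatures come straight from the list above, the attribute namespaces belong to the preview extension package (so they may shift), and MyRequestModel/MyResponseModel are stand-ins for your own types:

using System.Net;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
// Open API attributes come from the preview extension package

public class MyRequestModel { }
public class MyResponseModel { }

public static class AddStuffFunction
{
    [FunctionName("AddStuff")]
    [OpenApiOperation("addStuff", "stuff")]
    [OpenApiRequestBody("application/json", typeof(MyRequestModel))]
    [OpenApiResponseBody(HttpStatusCode.OK, "application/json", typeof(MyResponseModel))]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // Normal function logic; the Open API attributes only feed the
        // generated document
        return new OkObjectResult(new MyResponseModel());
    }
}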

Achieving inheritance with Blazor CSS isolation

After writing about Blazor CSS isolation over the last few months, I’ve received a common question: how do I achieve inheritance with CSS isolation? I wrote about it this week.

The concept is a little weird—isolation scopes CSS to your components, and “inheriting” or sharing styles across components seems to go against a lot of those advantages. Even so, in some limited scenarios, it can help maintenance when dealing with similarly grouped components.

From the post (quoting myself in my newsletter deserves a trip to the therapist, I must admit):

If we’re honest, with Blazor CSS isolation, the “C” (cascading) is non-existent. There’s no cascading going on here—isolating your styles is the opposite of cascading. With Blazor CSS isolation, you scope your CSS component styles. Then, at build time, Blazor takes care of the cascading for you. When you wish to inherit styles and share them between components, you’re losing many of the advantages of using scoped CSS.
In my opinion, using inheritance with CSS isolation works best when you want to share styles across a small group of similar components. If you have 500 components in your solution and you want to have base styles across them all, you may be creating more problems for yourself. In those cases, you should stick with your existing tools.

With those caveats aside, how do you do it? The key here is that Blazor CSS isolation is file-based, not class-based, so it doesn’t know about your object graph. Instead, you group components under a custom scope identifier, which you define in your project file. I’ve included these updates in this week’s blog post and in the official Microsoft ASP.NET Core doc.

Ignoring calls from Bill Gates

I’ve really been enjoying reading Steven Sinofsky. A Microsoft veteran—with stories to share from building out Windows in the antitrust years—he’s writing a serialized book on Substack, Hardcore Software. It’ll be split into 15 chapters and an epilogue. In his first chapter, he writes about joining Microsoft. He never returned calls from Bill Gates because he thought his friends were pranking him. When he finally did, he was met with some hilarious frankness:

“Hi, Steve, this is Bill Gates.” … “Hello. Thank you for calling, and so sorry for the confusion. I thought a friend of mine…”
“So, David gave me this list of like ten people and I’m supposed to call all of them and convince them to work at Microsoft. You should come work at Microsoft. Do you have any questions?”
“Well, why haven’t you accepted yet? You have a good offer.”
“I’m considering other things. I have been really interested in government service.”
“Government? That’s for when you’re old and stupid.”

This is a fun history lesson and I’m looking forward to reading it all.

🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ How to nuke sensitive commits from your GitHub repository ]]> https://www.daveabrock.com/2021/02/09/nuke-sensitive-commits-from-github/ 608c3e3df4327a003ba2fe8b Mon, 08 Feb 2021 18:00:00 -0600 We all make mistakes. Let’s talk about one of my recent ones.

I created a GitHub repository that tweets old posts of mine, which was heavily inspired by (read: stolen from) Khalid Abuhakmeh’s solution. (Yes, the repo is called TweetOldPosts.) It is scheduled through a GitHub Action.

To connect to the Twitter API through Tweetinvi, I had to pass along a consumer key, consumer secret, access token, and access token secret. Like a responsible developer, I stored these as encrypted secrets in GitHub. Before I did that, though, I hardcoded these values to get things working. I pushed those changes to GitHub. Oops.
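
For the record, the responsible version references those encrypted secrets from the workflow instead of hardcoding anything. Here’s a sketch of such a step, with illustrative secret and project names:

# The keys come from repository secrets, never from source code
- name: Tweet an old post
  env:
    TWITTER_CONSUMER_KEY: ${{ secrets.TWITTER_CONSUMER_KEY }}
    TWITTER_CONSUMER_SECRET: ${{ secrets.TWITTER_CONSUMER_SECRET }}
    TWITTER_ACCESS_TOKEN: ${{ secrets.TWITTER_ACCESS_TOKEN }}
    TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
  run: dotnet run --project TweetOldPosts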

My bad commit
Mistakes were made.

After refreshing the keys—which you should do immediately, should you find yourself in the same situation—I was wondering how I could pretend this never happened. This post shows you how to delete files with sensitive data from your commit history.

To resolve this issue, you could use filter-branch, if the warning in the documentation doesn’t scare you:

git filter-branch has a plethora of pitfalls that can produce non-obvious manglings of the intended history rewrite (and can leave you with little time to investigate such problems since it has such abysmal performance). These safety and performance issues cannot be backward compatibly fixed and as such, its use is not recommended. Please use an alternative history filtering tool such as git filter-repo. If you still need to use git filter-branch, please carefully read SAFETY (and PERFORMANCE) to learn about the land mines of filter-branch, and then vigilantly avoid as many of the hazards listed there as reasonably possible.

Eek. Scary. Luckily, there’s a simpler and faster alternative: the open-source BFG Repo Cleaner. I wanted something quick and simple, because this is the only time I’ll need to do this. Stop laughing.

Remove appsettings.json from commit history

In my case, I accidentally pushed an appsettings.json file, and I don’t need that file at all. Once I downloaded the BFG JAR file, my task was pretty simple.

Here’s what to do:

Clone a fresh copy of your impacted repository with the --mirror flag, like this:

git clone --mirror https://github.com/{user}/{repo}.git

The --mirror flag clones a bare repository, which is a full copy of your Git database—but it contains no working files. If you’ve ever looked into a .git folder, it’ll look familiar. Make sure you create a backup of this repository.

My bare repo

Assuming you have the Java runtime installed, run the following command. (You can also set an alias for java -jar bfg.jar to just call bfg.) This deletes the file from your history (but not the current version, although I don’t care).

java -jar bfg.jar --delete-files appsettings.json

With this command, BFG cleans my commits and branches to remove appsettings.json.

Here’s the response I get back:

 Using repo : C:\code\TweetOldPosts.git

 Found 8 objects to protect
 Found 2 commit-pointing refs : HEAD, refs/heads/master

 Protected commits
 -----------------

 These are your protected commits, and so their contents will NOT be altered:

 * commit xxxxxxxx (protected by 'HEAD')

 Cleaning
 --------

 Found 17 commits
 Cleaning commits:       100% (17/17)
 Cleaning commits completed in 182 ms.

 Updating 1 Ref
 --------------

         Ref                 Before     After
         ---------------------------------------
         refs/heads/master | xxxxxxxx | xxxxxxxx

 Updating references:    100% (1/1)
 ...Ref update completed in 14 ms.

 Commit Tree-Dirt History
 ------------------------

         Earliest      Latest
         |                  |
         .Dm Dmmmmm mmmmm mmm

         D = dirty commits (file tree fixed)
         m = modified commits (commit message or parents changed)
         . = clean commits (no changes to file tree)

                                 Before     After
         -------------------------------------------
         First modified commit | xxxxxxxx | xxxxxxxx
         Last dirty commit     | xxxxxxxx | xxxxxxxx

 Deleted files
 -------------

         Filename           Git id
         -----------------------------------
         appsettings.json | xxxxxxxx (308 B)


 In total, 20 object ids were changed. Full details are logged here:

         C:\code\TweetOldPosts.git.bfg-report\2021-02-09\21-26-32

While the Git data is updated, nothing has been deleted yet. After you confirm the repo is updated correctly, run the following two commands to do so. Again, make sure you have a backup.

 git reflog expire --expire=now --all
 git gc --prune=now --aggressive

Finally, do a git push to push your changes up to GitHub. My file is no longer in my commit history. You’re ready to do a fresh git clone and pretend that it never happened (unless you decided to write about it).

]]>
<![CDATA[ The .NET Stacks #35: 🔑 Nothing is certain but death and expiring certificates ]]> https://www.daveabrock.com/2021/02/06/dotnet-stacks-35/ 608c3e3df4327a003ba2fe8a Fri, 05 Feb 2021 18:00:00 -0600 Welcome to another week, everybody. I’ve been getting some complaints; I haven’t had a bad joke in a while. I know it’d be a nice shot in the arm, but now’s not the time to be humerus.

We’re covering a lot of little things this week, so let’s get started.


This week, the .NET team announced improvements to the new Razor editor in Visual Studio (obviously in response to my complaints last week).

Six months ago, the team announced a preview of an experimental Razor editor based on a common Razor language server. (We talked about it in Issue #9, if you want all the specifics.) It’s now available in the latest Visual Studio preview (16.9 Preview 3).

The new Razor editor allows the team to enable C# code actions more easily—with this update, Razor can help you discover using statements and null checks. This also extends to Blazor. Check out the blog post for details (and you can fill out a survey about syntax coloring).


The NuGet team needed some #HugOps this week as they dealt with an expired certificate that temporarily broke .NET 5 projects running on Debian Linux.

Here’s what the NuGet team had to say:

This is because of an issue reported in the ca-certificates package in which the root CA being used are not trusted and must be installed manually due to Symantec Issues. Debian removed the impacted Symantec certificate from their ca-certificates package in buster (Debian 10) impacting all users … The Debian patch was too broad in removing the certificate for all uses and created the error messages seen today.

For some good news, the NuGet website has a new Open in FuGet Package Explorer link that allows you to explore packages with FuGet.


This week, I wrote about Signed HTTP Exchanges. In our interview with Blazor creator Steve Sanderson, he mentioned it as a way to potentially help speed up runtime loading for Blazor WebAssembly applications by using it as a type of cross-site CDN cache.

Here’s what he told us:

It’s conceivable that new web platform features like Signed HTTP Exchanges could let us smartly pre-load the .NET WebAssembly runtime in a browser in the background (directly from some Microsoft CDN) while you’re visiting a Blazor WebAssembly site, so that it’s instantly available at zero download size when you go to other Blazor WebAssembly sites. Signed HTTP Exchanges allow for a modern equivalent to the older idea of a cross-site CDN cache. We don’t have a definite plan about that yet as not all browsers have added support for it.

It isn’t a sure thing because the technology is still quite new, but it’s a promising idea to overcome the largest drawback of using Blazor WebAssembly.


I’ve written about prerendering Blazor WebAssembly apps. It’s a little weird, as you give up the ability to host your app as static files. Why not host over Blazor Server, then?

This week, Andrew Lock wrote about a creative approach where you can prerender an app to static files without a host app. It does have tradeoffs—you need to define routes beforehand and it doesn’t work well if you’re working with dynamic data—but it’s worth checking out if it fits your use case.


At the Entity Framework community standup this week, they discussed the MSBuild.Sdk.SqlProj project with Jonathan Mezach. Already sitting at 42k downloads from GitHub, it’s an SDK that produces .dacpac files from SQL scripts that can be deployed using SqlPackage.exe or dotnet publish. It’s like SQL Server Data Tools but built with the latest tooling.

You can check out Jonathan’s announcement on his blog to understand how it all works and his motivations for building it.
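
A minimal project file looks roughly like this; the SDK version is illustrative, so check the repo for the current one:

<Project Sdk="MSBuild.Sdk.SqlProj/1.0.0">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <!-- .sql files in the project folder are picked up automatically;
       dotnet build then produces a .dacpac you can deploy -->
</Project>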


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ How to achieve style inheritance with Blazor CSS isolation ]]> https://www.daveabrock.com/2021/01/31/blazor-css-iso-inheritance-scopes/ 608c3e3df4327a003ba2fe89 Sat, 30 Jan 2021 18:00:00 -0600 Are you familiar with Blazor CSS isolation? Last year, when it was released, I wrote an introductory post and the official ASP.NET Core doc. If you want a quick primer, here goes: Blazor CSS isolation allows you to scope styles to a specific component. With CSS isolation, you can package your styles to your components, and you also don’t need to worry about the inevitable headaches when working with vendor styles and global CSS.

As I’ve spoken to a few user groups about CSS isolation, I always get the same question: how can I use this to inherit styles between components? Good question. This post will show you how.

When I say inherit styles, I’m not referring to how you can pass the same styles down to child components, such as when you want all of a component’s children to have the same h1 style declaration. I’m referring to when you want a base set of styles shared across a set of components. You might be doing this without CSS isolation. For example, if you define most of your styles using a library like Bootstrap, you lean on the library for most of your styles—then you have a custom stylesheet (like custom.css) that overrides those when appropriate.

If we’re honest, with Blazor CSS isolation, the “C” (cascading) is non-existent. There’s no cascading going on here—isolating your styles is the opposite of cascading. With Blazor CSS isolation, you scope your CSS component styles. Then, at build time, Blazor takes care of the cascading for you. When you wish to inherit styles and share them between components, you’re losing many of the advantages of using scoped CSS.

In my opinion, using inheritance with CSS isolation works best when you want to share styles across a small group of similar components. If you have 500 components in your solution and you want to have base styles across them all, you may be creating more problems for yourself. In those cases, you should stick with your existing tools.

With that knowledge in place, let’s understand how to use inheritance with Blazor CSS isolation.

Inherit styles using CSS isolation

I’m using a newly created Blazor WebAssembly app (it’ll work for Blazor Server, too). If you’ve worked with Blazor, you’ll recognize it: the template with Index, Counter, and FetchData components. To create the app, run the following from the dotnet CLI, then run it to confirm it works:

dotnet new blazorwasm -o "CSSIsoInheritance"
cd CSSIsoInheritance
dotnet run

In your Pages folder, create the following files:

  • BaseComponent.razor
  • BaseComponent.razor.css
  • FetchData.razor.css
  • Counter.razor.css

In BaseComponent.razor.css, we’ll define an h1 and p style to share across our components.

h1 {
    color: red;
}

p {
    text-decoration-line: underline;
}

Your key to inheritance: scope identifiers

As mentioned previously, Blazor CSS isolation is a compile-time, file-based solution. Out of the box, it’ll look for a MyComponent.razor.css file that matches a MyComponent.razor file. At compile time, Blazor assigns a scope identifier to your elements for you, and it looks like this (your identifier will differ):

A lot of unused styles
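
In case the screenshot doesn’t come through, here’s roughly what the output looks like. The b-3xxtam6d07 identifier below is made up for illustration; yours will be a different generated value:

<!-- Rendered markup: every element in the component gets the scope attribute -->
<h1 b-3xxtam6d07>Counter</h1>

/* Bundled stylesheet: the attribute is appended to each selector */
h1[b-3xxtam6d07] {
    color: red;
}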

You can’t expect the solution to know about inherited classes. I say this repeatedly because it’s important: Blazor CSS isolation is a file-based solution. To accomplish inheritance, you apply a custom scope identifier across your impacted components. From your project file, you can add the following <ItemGroup>:

<ItemGroup>
    <None Update="Pages/BaseComponent.razor.css" CssScope="inherit-scope" />
    <None Update="Pages/Counter.razor.css" CssScope="inherit-scope" />
    <None Update="Pages/FetchData.razor.css" CssScope="inherit-scope" />
</ItemGroup>

To save some lines, you can also use the wildcard * operator to include any files that end with razor.css:

<ItemGroup>
    <None Update="Pages/*.razor.css" CssScope="inherit-scope" />
</ItemGroup>

If you browse to the Counter and FetchData components, you’ll see the styles are applied. Here’s how the Counter component looks.

The inheritance works

If you open your Developer Tools, you’ll see the scope identifier is applied correctly. If you’re wondering, the b-lib7l0qa43 identifier belongs to the MainLayout.razor file.

The inheritance works

Since we didn’t apply a custom scope identifier to our Index component, it’s left alone.

The inheritance works

Solution drawbacks

As I mentioned before, you should do this carefully and cautiously, so you don’t introduce more headaches. Aside from that, the solution isn’t elegant, because:

  • Every derived component must have a razor.css file, even if it’s empty
  • In our case (and in many cases), the base component markup is empty
  • You need to explicitly assign an identifier in your project file (or name impacted components similarly)

Out of the box, the solution is meant for you to isolate and not share styles. However, if this fits your use case and makes sense to you, you can use this to lower your maintenance costs.

Wrap up

In this post, we covered how to use inheritance with Blazor CSS isolation: how to implement it, along with its benefits and drawbacks. As always, feel free to share your experiences in the comments.

]]>
<![CDATA[ The .NET Stacks #34: 🎙 Visual Studio gets an update, and you get a rant ]]> https://www.daveabrock.com/2021/01/30/dotnet-stacks-34/ 608c3e3df4327a003ba2fe88 Fri, 29 Jan 2021 18:00:00 -0600 Happy Monday to you all. I hope you had a good week. Put on your comfy mittens and let’s get started.

  • Visual Studio gets an update, and you get a rant
  • EF Core 6 designs take shape
  • GitHub Pages gets more enterprise-y
  • A quick correction

Visual Studio gets an update, and you get a rant

The Visual Studio team has released v16.9 Preview 3. This one’s a hodgepodge, but a big call out is the ability to view source generators from the Solution Explorer. This allows you to inspect the generated code easily. People are already getting excited.

Since we’re on the subject, Jerry Nixon polled some folks this week about their preferred IDE. (I say VS Code is only an IDE when you install extensions, but I digress.) As of Sunday night, JetBrains Rider comes in at around 26%. It might seem low to you, but not to me. After years of Windows-only .NET making Visual Studio the only realistic IDE, it’s amazing that a third-party tool is gaining so much traction. This recognition is well-deserved, by the way—it’s fast, feature-rich, and does so much out-of-the-box that Visual Studio doesn’t. I installed it this week after thinking about it for years, and am impressed so far.

Of course, your mileage may vary and the “new and different” is always alluring at first. But, to me, what’s more troubling is my reason for giving it a try. I wouldn’t be offended if you called me a Visual Studio fanboy. I’ve used it for over a decade and a half. Even so: the Razor editing experience is a pain (with the admission they are rolling out a new editor), component discovery is a hassle when working with Blazor, and closing and restarting Visual Studio is still a thing in 2021. This is part of a general theme (and I’m not the only one): why is Blazor tooling in Visual Studio still so subpar? I’ve found Rider to provide a better Blazor development experience. Is it anecdotal? Yes. Do I think it’s just me? No.

I know it’s being worked on, I get it, but: when you deliver a wonderful, modern library with hype that hasn’t been seen in years … how is your flagship IDE not providing a development experience to match? Whether it’s internal priorities or a million other reasons that impact a large corporation, expecting Visual Studio users to wait and deal with it only works when customers don’t have a choice. They do now, and a non-Microsoft tool is consistently beating them on the Blazor development experience. I’m disappointed.


EF Core 6 designs take shape

This week, Jeremy Likness shared the high-level, subject-to-change plan for EF Core 6—to be released with .NET 6 in November. It’ll target .NET 6, of course; it won’t run on .NET Framework and likely won’t support any flavor of .NET Standard.

Some big features make the list, like SQL Server temporal tables (allowing you to create them from migrations and the ability to access historical data), JSON columns, compiled models, and a plan to match Dapper performance. The latter definitely caught my eye. As David Ramel writes this week, he quotes Jeremy as saying: “This is a significant challenge which will likely not be fully achieved. Nevertheless, we will get as close as we can.”

In other EF news, the team released a new doc on EF Core change tracking.


GitHub Pages gets more enterprise-y

As Microsoft is trying to make sense of competing workloads with GitHub and consolidate products—we’ve talked a few times about Azure DevOps and GitHub Actions—look for things in GitHub to get a little more enterprise-y.

This week, GitHub released something called “Access Control for GitHub Pages”, rolled out to the GitHub Enterprise Cloud. This gives you the option to limit access of GitHub Pages to users with repository access. This is ideal for documentation and knowledge bases, or any other static site needs.


A quick correction

There is no magic “undo” command with newsletters, sadly. I’d like to make a correction from last week.

I hope you enjoyed last week’s interview with Steve Sanderson, the creator of Blazor. I made a mistake in the introduction. Here’s what I originally wrote:

It seems like forever ago when, at NDC Oslo in 2017, Steve Sanderson talked about a fun project he was working on, called .NET Anywhere. In the demo, he was able to load and run C# code—ConsoleApp1.dll, specifically—in the browser, using Web Assembly. C# in the browser! In the talk, he called it “an experiment, something for you to be amused by.”

I made the implication that Steve created the Dot Net Anywhere (DNA) runtime. With apologies to Chris Bacon, not true!

Here’s what I should have said:

It seems like forever ago when, at NDC Oslo in 2017, Steve Sanderson showed off a new web UI framework with the caveat: “an experiment, something for you to be amused by.” By extending Dot Net Anywhere (DNA), Chris Bacon’s portable .NET runtime, on WebAssembly, he was able to load and run C# in the browser. In the browser!

Thanks to Steve for the clarification.


🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🏗 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Signed HTTP Exchanges: A path for Blazor WebAssembly instant runtime loading? ]]> https://www.daveabrock.com/2021/01/26/signed-http-exchanges-cdn-cache/ 608c3e3df4327a003ba2fe87 Mon, 25 Jan 2021 18:00:00 -0600 Before you sit down and write the next great Blazor application, you first need to think about hosting: Blazor Server or Blazor WebAssembly? I’ve written about this quite a bit, but I’ll provide a condensed version for you.

Blazor Server executes your app on the server from an ASP.NET Core app. JS calls, event handling, and UI updates are managed over a persistent SignalR connection. In this case, the app loads much faster, you can enjoy .NET Core APIs on the server, and your apps work with browsers that don’t support WebAssembly.

For Blazor WebAssembly, there’s no server-side dependency, you can leverage client capabilities, and it works great for serverless deployment scenarios. For example, in my Blast Off with Blazor series, it’s served using Blazor WebAssembly and Azure Functions—I only have to pay for compute.

The biggest drawback to Blazor WebAssembly, of course, is the runtime download size. With no runtime dependency, users need to wait for the .NET runtime to load in the browser. While it’s getting smaller thanks to IL trimming and other techniques, it still is a major factor to consider. You can definitely help with initial load time—thanks to server prerendering and other techniques—but the download size will always be a factor. Or will it?

In a recent interview with Blazor creator Steve Sanderson, I posed this question: do we see a point where it’ll ever be as lightweight as leading front-end frameworks, or will we need to understand it’s a cost that comes with a full framework in the browser?

Here’s what he said:

The size of the .NET runtime isn’t ever going to reduce to near-zero, so JS-based microframeworks (whose size could be just a few KB) are always going to be smaller. We’re not trying to win outright based on size alone—that would be madness. Blazor WebAssembly is aimed to be maximally productive for developers while being small enough to download that, in very realistic business app scenarios, the download size shouldn’t be any reason for concern.
That said, it’s conceivable that new web platform features like Signed HTTP Exchanges could let us smartly pre-load the .NET WebAssembly runtime in a browser in the background (directly from some Microsoft CDN) while you’re visiting a Blazor WebAssembly site, so that it’s instantly available at zero download size when you go to other Blazor WebAssembly sites. Signed HTTP Exchanges allow for a modern equivalent to the older idea of a cross-site CDN cache. We don’t have a definite plan about that yet as not all browsers have added support for it.

Interesting! I’ve read about Signed HTTP Exchanges before but not in the context of a cross-site CDN cache. In this post, I’d like to unpack the concept of a cross-site CDN cache, describe Signed HTTP Exchanges, and how this new technology might possibly help with downloading the .NET runtime in the browser for Blazor WebAssembly apps.

To be clear, I’m not saying that Microsoft plans on using Signed HTTP Exchanges to assist with Blazor WebAssembly browser load time. I’m exploring the technology here to understand how it all works.

The value of a “cross-site CDN cache”

Before we discuss the values of a cross-site CDN cache, let’s briefly discuss a few big reasons why you might want to use a Content Delivery Network (CDN) to store your resources instead of handling it yourself:

  • Better content availability - because CDNs are typically distributed worldwide, they can handle more traffic easily. You can trust that something hosted by a Google or a Microsoft will withstand the pressures of the modern web.
  • Faster load times - because CDNs are globally distributed, you can serve content to users from locations closer to them.
  • Cost reduction - through caching and other optimization techniques, CDNs can reduce the data footprint.

It’s the caching we want to focus on here. How does this relate to cross-site caching? This is definitely not a new idea. Let’s think about the jQuery library. (While it isn’t as popular as it once was, it’s still all over the web, including in your latest ASP.NET Core project templates.)

The idea of cross-site CDN caching is simple. If folks all over the world are accessing a ubiquitous library like jQuery from its official CDN, the odds are high that they already have a cached script file in their browser from a visit to another website. This significantly speeds up script load time when users make their way to your site.
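
For example, any site that references the library from the official CDN with a tag like this (the version number is just for illustration) lets the browser reuse a copy it cached while visiting another site:

<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>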

Will Blazor ever be as ubiquitous across the web as jQuery once was? Time will tell. But it’s a reasonable strategy to address how to load the .NET runtime in the browser. It isn’t as simple as asking Microsoft to stand up a new CDN, as this pattern invites tracking abuse and potential security problems. We need a modern approach. Let’s see if Signed HTTP Exchanges can help.

Enter Signed HTTP Exchanges

Here’s an elevator pitch for Signed HTTP Exchanges, taken straight from Google:

Signed HTTP Exchange (or “SXG”) … enables publishers to safely make their content portable, i.e. available for redistribution by other parties, while still keeping the content’s integrity and attribution. Portable content has many benefits, from enabling faster content delivery to facilitating content sharing between users, and simpler offline experiences.

This especially helps with serving content from third-party caches. If Microsoft has a CDN to host the Blazor WebAssembly runtime bits, they could leverage SXG to make this possible.

How does this work? When a publisher signs an HTTP exchange (a request/response pair) from any cached server, the content can be published and referenced elsewhere on the web without a dependency on a server, connection, or even a hosting service. Feel free to geek out on the juicy details. (If you’re a publisher, you enable this by signing your content with a certificate that carries the special CanSignHttpExchanges extension.)

As the main use case is delivering a page’s main document, sites could leverage it by using an a or link tag:

<a href="https://myexample.com/sxg">View this signed content</a>
<link rel="prefetch" as="document" href="https://myexample.com/sxg">

In our case, Microsoft could use SXG preloading to download the runtime in the background and cache it, allowing for a lightning-quick experience as you navigate other Blazor WebAssembly sites. Today, we load resources like this over a <script> tag. That approach in SXG—called subresource loading—is currently not recommended, but that may change over time.

Browser support

Over at caniuse.com, you can see how browser support looks—at the time of this writing it enjoys around 67% support for users worldwide. Edge, Chrome, and Opera rolled out support for it around the spring of 2020, and Firefox does not support it yet.

Browser support for SXG

In addition to this, SXGs support advanced content negotiation. This means you can serve both SXG and non-SXG versions of the same content, depending on whether a browser supports it.
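
Under the hood, that negotiation happens with ordinary HTTP headers. Here’s a simplified sketch of a request/response pair (header values trimmed; v=b3 was the version browsers advertised at the time of this writing):

GET /article HTTP/1.1
Accept: application/signed-exchange;v=b3,text/html;q=0.8

HTTP/1.1 200 OK
Content-Type: application/signed-exchange;v=b3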

Wrap up

In this post, we talked about Signed HTTP Exchanges and how they might help with loading the .NET runtime in the browser. It’s always scary writing about a new topic that’s subject to change—so please let me know if you have any suggestions or corrections.

]]>
<![CDATA[ The .NET Stacks #33: 🚀 A blazing conversation with Steve Sanderson ]]> https://www.daveabrock.com/2021/01/23/dotnet-stacks-33/ 608c3e3df4327a003ba2fe86 Fri, 22 Jan 2021 18:00:00 -0600 Happy Monday, all. What did you get NuGet for its 10th birthday?

This week:

  • Microsoft blogs about more .NET 5 improvements
  • A study on migrating a hectic service to .NET Core
  • Meet Jab, a new compile-time DI library
  • Dev Discussions: Steve Sanderson
  • Last week in the .NET world

Microsoft blogs about more .NET 5 improvements

This week, Microsoft pushed a few more blog posts to promote .NET 5 improvements: Sourabh Shirhatti wrote about diagnostic improvements, and Máňa Píchová writes about .NET networking improvements.

Diagnostic improvements

With .NET 5, the diagnostic suite of tools no longer needs to be installed as .NET global tools—they can now be installed without the .NET SDK, through a single-file distribution mechanism that only requires a .NET Core 3.1 (or higher) runtime. You can check out the GitHub repo to geek out on all the available diagnostics tools. In other news, you can now perform startup tracing from EventPipe, as the tooling can suspend the runtime during startup until a tool is connected. Check out the blog post for the full treatment.

Networking improvements

In terms of .NET 5 networking improvements, the team added the ability to use cancellation timeouts from HttpClient without the need for a custom CancellationToken. While the client still throws a TaskCanceledException, the inner exception is a TimeoutException when timeouts occur. .NET 5 also supports multiple connections with HTTP/2, a configurable ping mechanism, experimental support for HTTP/3, and various telemetry improvements. Check out the networking blog post for details. It’s a nice complement to Stephen Toub’s opus about .NET 5 performance improvements.
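
Here’s a quick sketch of what the timeout behavior looks like in practice (the URL and five-second timeout are placeholders I made up):

using System;
using System.Net.Http;
using System.Threading.Tasks;

var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

try
{
    var response = await client.GetAsync("https://example.com/slow-endpoint");
}
catch (TaskCanceledException ex) when (ex.InnerException is TimeoutException)
{
    Console.WriteLine("The request timed out."); // the client-side timeout fired
}
catch (TaskCanceledException)
{
    Console.WriteLine("The request was canceled."); // an explicit CancellationToken fired
}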


A study on migrating a hectic service to .NET Core

This week, Avanindra Paruchuri wrote about migrating the Azure Active Directory gateway—and its 115 billion daily requests—over to .NET Core. While there’s nothing preventing you from hosting .NET Framework apps in the cloud, the bloat of the framework often leads to expensive cloud spend.

The gateway’s scale of execution results in significant consumption of compute resources, which in turn costs money. Finding ways to reduce the cost of executing the service has been a key goal for the team behind it. The buzz around .NET Core’s focus on performance caught our attention, especially since TechEmpower listed ASP.NET Core as one of the fastest web frameworks on the planet.
In Azure AD gateway’s case, we were able to cut our CPU costs by 50%. As a result of the gains in throughput, we were able to reduce our fleet size from ~40k cores to ~20k cores (50% reduction) … Our CPU usage was reduced by half on .NET Core 3.1 compared to .NET Framework 4.6.2 (effectively doubling our throughput).

It’s a nice piece on how they were able to gradually move over and gotchas they learned along the way.


Meet Jab, a new compile-time DI library

This week, Pavel Krymets introduced Jab, a library used for compile-time dependency injection. Pavel works with the Azure SDKs and used to work on the ASP.NET Core team. Remember a few weeks ago, when we said that innovation in C# source generators will be coming in 2021? Here we go.

From the GitHub readme, it promises fast startup (200x faster than Microsoft.Extensions.DependencyInjection), fast resolution (a 7x improvement), and no runtime dependencies, with all code generated during project compilation. Will it run on ASP.NET Core? Not likely, since ASP.NET Core is heavily dependent on the runtime thanks to type accessibility and dependency discovery, but Pavel wonders if there’s a middle ground.
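
From what I can gather from the readme, usage looks something like the sketch below. Treat the attribute names and the generated GetService call as illustrative (they reflect the readme as of this writing; check the repo for the current API):

using Jab;

interface IGreetingService { string Greet(string name); }

class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

// The source generator fills in this partial class at compile time: no runtime reflection.
[ServiceProvider]
[Transient(typeof(IGreetingService), typeof(GreetingService))]
internal partial class MyServiceProvider { }

// Usage: var greeter = new MyServiceProvider().GetService<IGreetingService>();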


Dev Discussions: Steve Sanderson

It seems like forever ago when, at NDC Oslo in 2017, Steve Sanderson showed off a new web UI framework with the caveat: “an experiment, something for you to be amused by.” By extending Dot Net Anywhere (DNA), Chris Bacon’s portable .NET runtime, on WebAssembly, he was able to load and run C# in the browser. In the browser!

Of course, this amusing experiment has grown into Blazor, a robust system for writing web UIs in C#. I was happy to talk to Steve Sanderson about his passions for the front-end web, how far Blazor has come, and what’s coming to Blazor in .NET 6.

Steve Sanderson profile photo

Years ago, you probably envisioned what Blazor could be. Has it met its potential, or are there other areas to focus on?

We’re not there yet. If you go on YouTube and find the first demo I ever did of Blazor at NDC Oslo in 2017, you’ll see my original prototype had near-instant live reloading while coding, and the download size was really tiny. I still aspire to get the real version of Blazor to have those characteristics. Of course, the prototype had the advantage of only needing to do a tiny number of things—creating a production-capable version is 100x more work, which is why it hasn’t yet got there, but has of course exceeded the prototype vastly in more important ways.

Good news though is that in .NET 6 we expect to ship an even better version of live-updating-while-coding than I had in that first prototype, so it’s getting there!

When looking at AOT, you’ll see increased performance but a larger download size. Do you see any other tradeoffs developers will need to consider?

The mixed-mode flavour of AOT, in which some of your code is interpreted and some is AOT, allows for a customizable tradeoff between size and speed, but also includes some subtleties like extra overhead when calling from AOT to interpreted code and vice-versa.

Also, when you enable AOT, your app’s publish time may go up substantially (maybe by 5-10 minutes, depending on code size) because the whole Emscripten toolchain just takes that long. This wouldn’t affect your daily development flow on your own machine, but likely means your CI builds could take longer.

It’s still quite impressive to see the entire .NET runtime run in the browser for Blazor Web Assembly. That comes with an upfront cost, as we know. I know that the Blazor team has done a ton of work to help lighten the footprint and speed up performance. With the exception of AOT, do you envision more work on this? Do you see a point where it’ll be as lightweight as other leading front-end frameworks, or will folks need to understand it’s a cost that comes with a full framework in the browser?

The size of the .NET runtime isn’t ever going to reduce to near-zero, so JS-based microframeworks (whose size could be just a few KB) are always going to be smaller. We’re not trying to win outright based on size alone—that would be madness. Blazor WebAssembly is aimed to be maximally productive for developers while being small enough to download that, in very realistic business app scenarios, the download size shouldn’t be any reason for concern.

That said, it’s conceivable that new web platform features like Signed HTTP Exchanges could let us smartly pre-load the .NET WebAssembly runtime in a browser in the background (directly from some Microsoft CDN) while you’re visiting a Blazor WebAssembly site, so that it’s instantly available at zero download size when you go to other Blazor WebAssembly sites. Signed HTTP Exchanges allow for a modern equivalent to the older idea of a cross-site CDN cache. We don’t have a definite plan about that yet as not all browsers have added support for it.

Check out the entire interview at my site.


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

👍 Design, testing, and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ How to use configuration with C# 9 top-level programs ]]> https://www.daveabrock.com/2021/01/19/config-top-level-programs/ 608c3e3df4327a003ba2fe85 Mon, 18 Jan 2021 18:00:00 -0600 I’ve been working with top-level programs in C# 9 quite a bit lately. When writing simple console apps in .NET 5, it allows you to remove the ceremony of a namespace and a Main(string[] args) method. It’s very beginner-friendly and allows developers to get going without worrying about learning about namespaces, arrays, arguments, and so on. While I’m not a beginner—although I feel like it some days—I enjoy using top-level programs to prototype things quickly.

With top-level programs, you can work with normal functions, use async and await, access command-line arguments, use local functions, and more. For example, here’s me working with some arbitrary strings and getting a random quote from the Ron Swanson Quotes API:

using System;
using System.Net.Http;

var name = "Dave Brock";
var weekdayHobby = "code";
var weekendHobby = "play guitar";
var quote = await new HttpClient().GetStringAsync("https://ron-swanson-quotes.herokuapp.com/v2/quotes");

Console.WriteLine($"Hey, I'm {name}!");
Console.WriteLine($"During the week, I like to {weekdayHobby} and on the weekends I like to {weekendHobby}.");
Console.WriteLine($"A quote to live by: {quote}");

Add configuration to a top-level program

Can we work with configuration with top-level programs? (Yes, should we is a different conversation, of course.)

To be clear, there are many, many ways to work with configuration in .NET. If you’re used to it in ASP.NET Core, for example, you’ve most likely done it from constructor dependency injection, wiring up a ServiceCollection in your middleware, or using the Options pattern—so you may think you won’t be able to do it with top-level programs.

Don’t overthink it. Using the ConfigurationBuilder, you can easily use configuration with top-level programs.

Let’s create an appsettings.json file to replace our hard-coded values with configuration values.

{
    "Name": "Dave Brock",
    "Hobbies": {
        "Weekday": "code",
        "Weekend": "play guitar"
    },
    "SwansonApiUri": "https://ron-swanson-quotes.herokuapp.com/v2/quotes"
}

Then, make sure your project file has the following packages installed, and that the appsettings.json file is being copied to the output directory:

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="5.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="5.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="5.0.0" />
  </ItemGroup>

  <ItemGroup>
    <None Update="appsettings.json">
      <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </None>
  </ItemGroup>

In your top-level program, create a ConfigurationBuilder with the appropriate values:

var config = new ConfigurationBuilder()
                 .SetBasePath(Directory.GetCurrentDirectory())
                 .AddJsonFile("appsettings.json")
                 .Build();
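
As an aside, since the Microsoft.Extensions.Configuration.EnvironmentVariables package is already in the project file above, you could layer environment variables on top of the JSON file; sources added later override earlier ones. A minimal sketch:

var config = new ConfigurationBuilder()
                 .SetBasePath(Directory.GetCurrentDirectory())
                 .AddJsonFile("appsettings.json")
                 .AddEnvironmentVariables() // e.g. set Hobbies__Weekday to override the JSON value
                 .Build();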

With a config instance, you’re ready to simply read in your values:

var name = config["Name"];
var weekdayHobby = config.GetSection("Hobbies:Weekday");
var weekendHobby = config.GetSection("Hobbies:Weekend");
var quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);
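
Note the difference between the two access styles: the indexer returns the string value directly, while GetSection returns an IConfigurationSection (which is why you’ll see .Value used below). These two lines are equivalent:

var weekdayHobby1 = config["Hobbies:Weekday"];
var weekdayHobby2 = config.GetSection("Hobbies:Weekday").Value;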

And here’s the entire top-level program in action:

using Microsoft.Extensions.Configuration;
using System;
using System.IO;
using System.Net.Http;

var config = new ConfigurationBuilder()
                 .SetBasePath(Directory.GetCurrentDirectory())
                 .AddJsonFile("appsettings.json")
                 .Build();

var name = config["Name"];
var weekdayHobby = config.GetSection("Hobbies:Weekday");
var weekendHobby = config.GetSection("Hobbies:Weekend");
var quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);

Console.WriteLine($"Hey, I'm {name}!");
Console.WriteLine($"During the week, I like to {weekdayHobby.Value}" +
        $" and on the weekends I like to {weekendHobby.Value}.");
Console.WriteLine($"A quote to live by: {quote}");

Review the generated code

When throwing this in the ILSpy decompilation tool, you can see there’s not a lot of magic here. The top-level program is merely wrapping the code in a Main(string[] args) method and replacing our implicit typing:

using System;
using System.IO;
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;

[CompilerGenerated]
internal static class <Program>$
{
  private static async Task <Main>$(string[] args)
  {
    IConfigurationRoot config = new ConfigurationBuilder().SetBasePath(Directory.GetCurrentDirectory()).AddJsonFile("appsettings.json").Build();
    string name = config["Name"];
    IConfigurationSection weekdayHobby = config.GetSection("Hobbies:Weekday");
    IConfigurationSection weekendHobby = config.GetSection("Hobbies:Weekend");
    string quote = await new HttpClient().GetStringAsync(config["SwansonApiUri"]);
    Console.WriteLine("Hey, I'm " + name + "!");
    Console.WriteLine("During the week, I like to " + weekdayHobby.Value + " and on the weekends I like to " + weekendHobby.Value + ".");
    Console.WriteLine("A quote to live by: " + quote);
  }
}

Wrap up

In this quick post, I showed you how to work with configuration in C# 9 top-level programs. We showed how to use a ConfigurationBuilder to read from an appsettings.json file, and we also reviewed the generated code.

]]>
<![CDATA[ Dev Discussions: Steve Sanderson ]]> https://www.daveabrock.com/2021/01/17/dev-discussions-steve-sanderson/ 608c3e3df4327a003ba2fe84 Sat, 16 Jan 2021 18:00:00 -0600 This is the full interview from my discussion with Steve Sanderson in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

It seems like forever ago when, at NDC Oslo in 2017, Steve Sanderson showed off a new web UI framework with the caveat: “an experiment, something for you to be amused by.” By extending Dot Net Anywhere (DNA), Chris Bacon’s portable .NET runtime, on WebAssembly, he was able to load and run C# in the browser. In the browser!

Of course, this amusing experiment has grown into Blazor, a robust system for writing web UIs in C#. I was happy to talk to Steve Sanderson about his passions for the front-end web, how far Blazor has come, and what’s coming to Blazor in .NET 6.

Steve Sanderson profile photo

What geared you toward the front-end web, over other tech?

It’s not that I’m personally more motivated by front-end web over other tech. I’m just as motivated by all kinds of other technology, whether that’s backend web stuff, or further out fields like ML, graphics, or games—even things like agriculture automation.

What I have found though is that my professional life has had more impact in front-end web than in other fields. I’m not certain why, but suspect it’s been an under-focused area. When I started my first software job in 2003, being able to do anything in a browser with JS was considered unusual, so it was pretty easy to exceed the state of the art then.

My front-end experience started with Knockout.js, your MVVM front-end framework. We’ve come a long away since then, for sure. Have any of your experience and learnings from Knockout impacted your work on Blazor?

Certainly. Knockout was my first critical-mass open source project, so that’s where I was forced to design APIs to account for how developers get confused and do things contrary to their own best interests, what to expect from the community, and where to draw the line about what features should be in or out of a library/framework.

Years ago, you probably envisioned what Blazor could be. Has it met its potential, or are there other areas to focus on?

We’re not there yet. If you go on YouTube and find the first demo I ever did of Blazor at NDC Oslo in 2017, you’ll see my original prototype had near-instant live reloading while coding, and the download size was really tiny. I still aspire to get the real version of Blazor to have those characteristics. Of course, the prototype had the advantage of only needing to do a tiny number of things—creating a production-capable version is 100x more work, which is why it hasn’t yet got there, but has of course exceeded the prototype vastly in more important ways.

Good news though is that in .NET 6 we expect to ship an even better version of live-updating-while-coding than I had in that first prototype, so it’s getting there!

The Blazor use case for .NET devs is quite clear. Do you see Blazor reaching a lot of folks in the JavaScript community (I’m thinking of Angular, where the learning curve is steep), or do you see Blazor’s impact limited to the .NET ecosystem?

Longer term I think it depends on the fundamentals: download size and perf. With .NET 5, Blazor WebAssembly’s main selling point is the .NET code, which easily makes it the best choice of framework for a lot of .NET-centric teams, but on its own that isn’t enough to win over a JS-centric team.

If we can get Blazor WebAssembly to be faster than JS in typical cases (via AoT compilation, which is very achievable) and somehow simultaneously reduce download sizes to the point of irrelevance, then it would be very much in the interests of even strongly JS-centric teams to reconsider and look at all the other benefits of C#/.NET too.

When looking at AOT, you’ll see increased performance but a larger download size. Do you see any other tradeoffs developers will need to consider?

The mixed-mode flavour of AOT, in which some of your code is interpreted and some is AOT, allows for a customizable tradeoff between size and speed, but also includes some subtleties like extra overhead when calling from AOT to interpreted code and vice-versa.

Also, when you enable AOT, your app’s publish time may go up substantially (maybe by 5-10 minutes, depending on code size) because the whole Emscripten toolchain just takes that long. This wouldn’t affect your daily development flow on your own machine, but likely means your CI builds could take longer.

It’s still quite impressive to see the entire .NET runtime run in the browser for Blazor Web Assembly. That comes with an upfront cost, as we know. I know that the Blazor team has done a ton of work to help lighten the footprint and speed up performance. With the exception of AOT, do you envision more work on this? Do you see a point where it’ll be as lightweight as other leading front-end frameworks, or will folks need to understand it’s a cost that comes with a full framework in the browser?

The size of the .NET runtime isn’t ever going to reduce to near-zero, so JS-based microframeworks (whose size could be just a few KB) are always going to be smaller. We’re not trying to win outright based on size alone—that would be madness. Blazor WebAssembly is aimed to be maximally productive for developers while being small enough to download that, in very realistic business app scenarios, the download size shouldn’t be any reason for concern.

That said, it’s conceivable that new web platform features like Signed HTTP Exchanges could let us smartly pre-load the .NET WebAssembly runtime in a browser in the background (directly from some Microsoft CDN) while you’re visiting a Blazor WebAssembly site, so that it’s instantly available at zero download size when you go to other Blazor WebAssembly sites. Signed HTTP Exchanges allow for a modern equivalent to the older idea of a cross-site CDN cache. We don’t have a definite plan about that yet as not all browsers have added support for it.

I know that you aren’t a Xamarin expert, but I’m intrigued by Blazor Mobile Bindings. Do you see there being convergence between MAUI and Blazor, or is this a “do what works for you” platform decision?

Who says I’m not a Xamarin expert, huh? Well, OK, I admit it—I’m not.

Our ideas around MAUI are pretty broad and allow for a lot of different architecture and syntax choices, without having a definite confirmation yet about what exactly are the most first-class built-in options. So I don’t think any conclusion exists here yet.

Speaking more broadly from the last question: do you see Blazor as the future of native app development in .NET, for mobile and desktop apps?

My guess is there will always be variety. .NET has always supported many different UI programming models (WinForms, WebForms, WPF, UWP, Xamarin, MVC, Razor Pages, Blazor, Unity). The idea that everything would converge on a single one true framework seems unlikely, because different customer groups have different goals and demands.

Blazor is perhaps the option that gives the widest reach across device types, as it’s obviously web-native, but can also target desktop and mobile apps via either web or native rendering within native app shells.

When looking ahead to .NET 6, in the next year or so, what’s coming with Blazor?

See our published roadmap. 😊

[Ed. Note: Fair enough. The heavy hitters include AOT compilation, hot reload, global exception handling, and required parameters for components.]

Does Blazor hold up from a load testing perspective? Would it support Amazon-like scale?

Blazor Server or Blazor WebAssembly?

Blazor WebAssembly has the benefit of imposing no per-client runtime cost on the server—from the server’s perspective, it’s just some static content to transmit. So in that sense it inherently scales near-infinitely. But for a public-facing shopping cart app, you most likely want server-rendered HTML to maximise SEO and minimize the risk that any potential customers fail to use the site just because of the initial page load time.

Blazor Server is a more obvious choice for a public-facing shopping cart app. We know it scales well, and a typical cloud server instance can comfortably manage 20k very active concurrent users—the primary limitation is RAM, not CPU. I don’t personally know how many front-end servers are involved in serving Amazon’s pages or how many concurrent users they are dealing with. However I do know that virtually everybody building business web apps is operating at a vastly smaller scale than Amazon, and can likely estimate (to within an order of magnitude) how many concurrent users they want to plan for and thus what server capacity is needed.

What is your one piece of programming advice?

When something isn’t working or behaves differently than you expected, don’t just keep changing things until it seems to work, as a lot of developers do. Make sure you figure out why it was doing what it was doing, otherwise you’re not really advancing your skills.

You can connect with Steve Sanderson on Twitter.

]]>
<![CDATA[ The .NET Stacks #32: 😎 SSR is cool again ]]> https://www.daveabrock.com/2021/01/16/dotnet-stacks-32/ 608c3e3df4327a003ba2fe83 Fri, 15 Jan 2021 18:00:00 -0600 Good morning and happy Monday! We’ve got a few things to discuss this week:

  • The new/old hotness: HTML over the wire
  • Xamarin.Forms 5.0 released this week
  • Quick break: how to explain C# string interpolation to the United States Senate
  • Last week in the .NET world

The new/old hotness: server-side rendering

Over the holidays, I was intrigued by the release of the Hotwire project, from the folks at Basecamp:

Hotwire is an alternative approach to building modern web applications without using much JavaScript by sending HTML instead of JSON over the wire. This makes for fast first-load pages, keeps template rendering on the server, and allows for a simpler, more productive development experience in any programming language, without sacrificing any of the speed or responsiveness associated with a traditional single-page application.

Between this and other tech such as Blazor Server, the “DOM over the wire” movement is in full force. It’s a testament to how bloated and complicated the front end has become.

Obviously, rendering partial HTML over the wire isn’t anything new at all—especially to us .NET developers—and it’s sure to bring responses like: “Oh, you mean what I’ve been doing the last 15 years?” As much as I enjoy the snark, it’s important to not write it off as the front-end community embracing what we’ve become comfortable with, as the technical details differ a bit—and we can learn from it. For example, it looks like instead of Hotwire working with DOM diffs over the wire, it streams partial updates over WebSocket while dividing complex pages into separate components, with an eye on performance. I wonder how Blazor Server would have been architected if this was released 2 years ago.

Xamarin.Forms 5.0 released this week

This week, the Xamarin team released the latest stable release of Xamarin.Forms, version 5.0, which will be supported through November 2022. There are updates for App Themes, Brushes, and SwipeView, among other things. The team had a launch party. Also, David Ramel writes that this latest version drops support for Visual Studio 2017: updates to Android and iOS are only delivered to Visual Studio 2019, which is pivotal for getting the latest updates from Apple and Google.

2021 promises to be a big year for Xamarin, as they continue preparing to join .NET 6—as this November, Xamarin.Forms evolves into MAUI (the .NET Multi-Platform App UI). This means more than developing against iPhones and Android devices, of course. With .NET 6 this also includes native UIs for iOS, Android, and desktops. As David Ramel also writes, Linux will not be supported out of the gate and VS Code support will be quite limited.

As he also writes, in a community standup David Ortinau clarifies that MAUI is not a rewrite.

So my hope and expectation, depending on the complexity of your projects, is you can be up and going within days … It’s not rewrites – it’s not a rewrite – that’s probably the biggest message that I should probably say over and over and over again. You’re not rewriting your application.

Quick break: how to explain C# string interpolation to the United States Senate

Did I ever think C# string interpolation would make it to the United States Senate? No, I most certainly did not. But last month, that’s what happened as former Cybersecurity and Infrastructure Security Agency (CISA) head Chris Krebs explained a bug:

It’s on page 20 … it says ‘There is no permission to {0}’. … Something jumped out at me, having worked at Microsoft. … The election-management system is coded with the programming language called C#. There is no permission to {0}’ is placeholder for a parameter, so it may be that it’s just not good coding, but that certainly doesn’t mean that somebody tried to get in there a 0. They misinterpreted the language in what they saw in their forensic audit.

It appears that the election auditors were scared by something like this:

Console.WriteLine("There is no permission to {0}");

To us, we know it’s just a log statement that verifies permission checks are working. It should have been coded using one of the following lines of code:

Console.WriteLine("There is no permission to {0}", permission);
Console.WriteLine($"There is no permission to {permission}");

I’m available to explain string interpolation to my government for a low, low rate of $1000 an hour. All they had to do was ask.


🌎 Last week in the .NET world

🔥 The Top 4

📢 Announcements

📅 Community and events

🌎 Web development

🥅 The .NET platform

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Blast Off with Blazor: Build a search-as-you-type box ]]> https://www.daveabrock.com/2021/01/14/blast-off-blazor-search-box/ 608c3e3df4327a003ba2fe82 Wed, 13 Jan 2021 18:00:00 -0600 So far in our series, we’ve walked through the intro, wrote our first component, dynamically updated the HTML head from a component, isolated our service dependencies, worked on hosting our images over Azure Blob Storage and Cosmos DB, built a responsive image gallery, and implemented prerendering.

With a full result set from Cosmos DB, in this post we’ll build a quick search box. While I was originally thinking I should execute queries against our database, I then thought: You dummy, you already have the data, just filter it. My inner voice doesn’t always offer sage advice, but it did this time.

Anyway, here’s how it’ll look.

Our search box

You’ve likely noticed the absence of a search button. This will be in the “search-as-you-type” style, as results will get filtered based on what the user types. It’s a lot less work than you might think.


The “easy” part: our filtering logic

In very simplistic terms, here’s what we’ll do: a user will type some text in the search box, and we’ll use a LINQ query to filter it. Then, we’ll display the filtered results.

In Images.razor.cs, we’ll add SearchText. This field captures what the user enters. We’ll take that string and see if any items contain it in their Title. We’ll want to make sure to initialize it to an empty string. That way, if nothing is entered, we’ll display the whole result set.

public string SearchText = "";

Then, I’ll include a FilteredImages field. This will return a filtered List<Image> that uses the LINQ query in question to filter based on the SearchText.

List<Image> FilteredImages => ImageList.Where(
    img => img.Title.ToLower().Contains(SearchText.ToLower())).ToList();
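
As an aside, if you’d rather skip the ToLower calls, newer .NET versions offer a string.Contains overload that takes a StringComparison. A minimal alternative (assuming a using System; directive) looks like this:

List<Image> FilteredImages => ImageList.Where(
    img => img.Title.Contains(SearchText, StringComparison.OrdinalIgnoreCase)).ToList();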

This part is relatively simple. We can do this in any type of app—Blazor, Web API, MVC, whatever. The power of Blazor is how we can leverage data binding to “connect” the data with our components to synchronize them and update our app accordingly. Let’s first understand data binding as a general concept.

Understand data binding

Data binding is not unique to Blazor. It’s essential to virtually all single-page application (SPA) libraries and frameworks. You can’t get away from displaying, updating, and receiving data.

The simplest example is with one-way binding. As inferred from the name, this means data updates only flow in one direction. In these cases, the application is responsible for handling the data. The gist: you have some data, an event occurs, and the data changes. There’s no need for a user to control the data yet.

In terms of events, by far the most common is a button click in Blazor:

<p>@TimesClicked</p>

<button @onclick="UpdateTimesClicked">Update Click Count</button>

@code {
    private int TimesClicked { get; set; } = 1;

    private void UpdateTimesClicked()
    {
        TimesClicked++;
    }
}

In this example, TimesClicked is bound to the component with the @ symbol. When a user clicks the button, the value changes—and, because an event handler was executed, Blazor triggers a re-render for us. Don’t let the simplicity of one-way binding fool you: in many cases, it’s all you need.

Whenever we need input from a user, we need to track that data and also bind it to a field. In our case, we need to know what the user is typing, store it in SearchText, and execute filtering logic as it changes. Because it needs to flow in both directions, we need to use two-way binding. For ASP.NET Core in particular, data binding involves … binding … an HTML element to a field, property, or Razor expression with the @bind attribute.

Understand how @bind and @bind-value work

In the ASP.NET Core documentation for Blazor data binding, this line should catch your eye:

When one of the elements loses focus, its bound field or property is updated.

In this case @bind works with an onchange handler after the input loses focus (like when a user tabs out). However, that’s not what we want. We want updates to occur as a user is typing and the ability to control which event triggers an update. After all, we don’t want to wait for a user to lose focus (and patience). Instead, we can use @bind-value. When we use @bind-value:event="event", we can specify a valid event like oninput, keydown, keypress, and so on. In our case, we’ll want to use oninput, or whenever the user types something.

Don’t believe me? That’s fine, we’re only Internet friends, I understand. You can go check out the compiled components in your project’s obj/Debug/net5.0/Razor/Pages directory. Here’s how my BuildRenderTree method looks with @bind-value (pay attention to the last line):

protected override void BuildRenderTree(Microsoft.AspNetCore.Components.Rendering.RenderTreeBuilder __builder)
{
    __builder.OpenElement(0, "div");
    __builder.AddAttribute(1, "class", "text-center bg-blue-100");
    __builder.AddAttribute(2, "b-bc0k7zrx0q");
    __builder.OpenElement(3, "input");
    __builder.AddAttribute(4, "class", "border-4 w-1/3 rounded m-6 p-6 h-8\r\n               border-blue-300");
    __builder.AddAttribute(5, "placeholder", "Search by title");
    __builder.AddAttribute(6, "value", Microsoft.AspNetCore.Components.BindConverter.FormatValue(SearchText));
    __builder.AddAttribute(7, "oninput", Microsoft.AspNetCore.Components.EventCallback.Factory.CreateBinder(this, __value => SearchText = __value, SearchText));
    // and more ...
}

Put it together

When we set @bind-value to SearchText and @bind-value:event to oninput, we’ll be in business. Here’s how we’ll wire up the search box:

<div class="text-center bg-blue-100">
    <input class="border-4 w-1/3 rounded m-6 p-6 h-8
               border-blue-300" @bind-value="SearchText"
           @bind-value:event="oninput" placeholder="Search by title" />
</div>

Finally, as we’re iterating through our images, pass in FilteredImages instead:

@foreach (var image in FilteredImages)
{
    <ImageCard ImageDetails="image" />
}

So now, here’s the entire code for our Images component:

@page "/images"
<div class="text-center bg-blue-100">
    <input class="border-4 w-1/3 rounded m-6 p-6 h-8
               border-blue-300" @bind-value="SearchText"
           @bind-value:event="oninput" placeholder="Search by title" />
</div>

@if (!ImageList.Any())
{
    <p>Loading some images...</p>
}
else
{
    <div class="p-2 grid grid-cols-1 sm:grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-3">
        @foreach (var image in FilteredImages)
        {
            <ImageCard ImageDetails="image" />
        }
    </div>
}

And the Images partial class:

using BlastOff.Shared;
using Microsoft.AspNetCore.Components;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace BlastOff.Client.Pages
{
    partial class Images : ComponentBase
    {
        public IEnumerable<Image> ImageList { get; set; } = new List<Image>();

        public string SearchText = "";

        [Inject]
        public IImageService ImageService { get; set; }

        protected override async Task OnInitializedAsync()
        {
            ImageList = await ImageService.GetImages(days: 70);
        }

        List<Image> FilteredImages => ImageList.Where(
            img => img.Title.ToLower().Contains(SearchText.ToLower())).ToList();
    }
}

Here’s the search box in action. Look at us go.

Wrap up

In this post, I wrote how to implement a quick “type-as-you-go” search box in Blazor. We wrote filtering logic, understood how data binding works, compared @bind and @bind-value in Blazor, and finally put it all together.

There’s certainly more we can do here—things like debouncing come in handy when we want to control (and sometimes delay) how often searches are executed. I’m less concerned because this is all executed on the client, but would definitely come into play when we want to limit excessive trips to the database.
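
If you want to experiment with debouncing, here’s a minimal sketch of one approach: swap the @bind-value wiring for an @oninput handler and only run the search once typing pauses. The 300ms delay and the OnSearchInput name are my own choices for illustration, not part of the code above:

// Requires System.Threading, System.Threading.Tasks, and Microsoft.AspNetCore.Components.
// Wire up in markup with: <input @oninput="OnSearchInput" placeholder="Search by title" />
private CancellationTokenSource debounceCts;

private async Task OnSearchInput(ChangeEventArgs e)
{
    debounceCts?.Cancel();
    debounceCts = new CancellationTokenSource();
    var token = debounceCts.Token;

    try
    {
        await Task.Delay(300, token); // wait for the user to stop typing
        SearchText = e.Value?.ToString() ?? "";
        // kick off the (potentially expensive) search here, e.g. a database query
    }
    catch (TaskCanceledException)
    {
        // a newer keystroke canceled this delay; do nothing
    }
}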

]]>
<![CDATA[ The .NET Stacks #31: 🥳 10 things to kick off '21 ]]> https://www.daveabrock.com/2021/01/09/dotnet-stacks-31/ 608c3e3df4327a003ba2fe81 Fri, 08 Jan 2021 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on January 4, 2021. Subscribe at the bottom of this post to get the content right away!

Well, friends, our dream has finally come true: 2020 is now over.

I hope your 2021 is off to a great start, and also hope you and your families were able to enjoy the holidays. I was lucky enough to take a break. I kept busy: hung out with the kids, watched a bunch of bad TV, worked out (which included shoveling snow three times; thanks, Wisconsin), tried out streaming, hacked around on some side projects, and generally just had fun with the time off.

This week, let’s play catch up. Let’s run through 10 things to get us ready for what’s to come in 2021. Ready?


With .NET 5 out to the masses, we’re beginning to get past a lot of the novelty of source generators—there’s some great use cases out there. (If you aren’t familiar, source generators are a piece of code that runs during compilation and can inspect your app to produce additional files that are compiled together from the rest of your code.)

To that end, Tore Nestenius writes about a source generator that automatically generates an API for a system that uses the MediatR library and the CQRS pattern. What a world.

Expect to see a lot of innovation with source generators this year.


In my Blast Off with Blazor blog series, I’ve been using Tailwind CSS to mark up my components—you can see it in action where I build a responsive image gallery. Instead of Bootstrap, which can give you previously styled UI components, it’s a utility-first library that allows you to style classes in your markup. You can avoid the hassle of overriding stylesheets when things aren’t to your liking.

To be clear, you may cringe at first when you see this. (I know I did.)

<div class="m-6 rounded overflow-hidden shadow-lg">
    <img class="w-full h-48 object-cover" src="@ImageDetails.Url" alt="@ImageDetails.Title" />
    <div class="p-6">
        <div class="flex items-baseline">
            <!-- stuff -->
        </div>
        <h3 class="mt-1 font-semibold text-2xl leading-tight truncate">@ImageDetails.Title</h3>
    </div>
</div>

Once you get used to it, though, you’ll begin to see its power. Adam Wathan, the creator of Tailwind, wrote a piece that helps you get over the initial visceral reaction.


Over the weekend, I asked this question:

This led to entertaining answers. A lot of people talked about going professional with their hobbies, like with music and art. I also found it interesting how many answers had parallels to programming, like being a cook, architecting buildings, and building and maintaining things.


What can’t you do with GitHub Actions? David Pine wrote an Action that performs machine-translations from Azure Cognitive Services for .NET localization. Tim Heuer used GitHub Actions for bulk resolving. Of course, you could go on and on.

The main use case is clear: workflows centered around your GitHub deployments, which goes hand-in-hand with GitHub being positioned as the long-term successor to the Azure DevOps platform. My prediction is that in 2021, the feature gap between Azure DevOps and GitHub Actions will narrow significantly.

The success of GitHub Actions is sure to blur the lines between all the Microsoft solutions—between Azure Functions, Actions, and WebJobs, for example. While the main use case for Actions is deployment workflows, it’s hard to beat an app backed by a quick YAML file.


We’ve talked about System.Text.Json quite a few times before. The gist from those posts: System.Text.Json is a new-ish native JSON serialization library for .NET Core. It’s fast and performant but isn’t as feature-rich as Newtonsoft.Json (this table gets worse the more you scroll)—so if you’re already using Newtonsoft, stick with it unless you’ve got advanced performance considerations.

A few days after Christmas, Microsoft’s Layomi Akinrinade provided an update on System.Text.Json. .NET 5 was a big release for the library, as the team pushed a lot of updates that brought more Newtonsoft-like functionality: you’ll see support for deserializing with parameterized constructors, conditionally ignoring properties (thank you), C# record types, and constructors that take serialization defaults.
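As a quick sketch of a couple of those features in .NET 5 (the Astronaut record here is purely illustrative):

using System;
using System.Text.Json;
using System.Text.Json.Serialization;

var json = @"{""name"":""Sally Ride"",""missions"":2}";

// Records now deserialize through their parameterized constructor
var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var astronaut = JsonSerializer.Deserialize<Astronaut>(json, options);
Console.WriteLine(astronaut); // Astronaut { Name = Sally Ride, Missions = 2, ... }

public record Astronaut(string Name, int Missions)
{
    // Conditionally ignore a property: only write Callsign when it isn't null
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string Callsign { get; init; }
}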

With .NET 6, Microsoft plans to “address the most requested features that help drive System.Text.Json to be a viable choice for the JSON stack in more .NET applications.” This is after writing (in the same post) that in .NET Core 3.0, “we made System.Text.Json the default serializer for ASP.NET Core because we believe it’s good enough for most applications.” So, that’s interesting. With Newtonsoft expected to top the NuGet charts for the foreseeable future, it’s nice to see Microsoft acknowledging there’s still much work to do to encourage folks to move over to its solution.


Egil Hansen, the man behind bUnit—the Blazor component testing library—describes how component vendors can make their components easily testable with a one-liner. Vendors ship an extension method on bUnit’s TestContext that users call before kicking off tests. The extension method can then be published as a NuGet package with a version matching the compatible library.

Check out the Twitter thread for details.
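In practice, the pattern looks something like this (a hypothetical sketch; the Acme names and services are purely illustrative):

using Bunit;
using Microsoft.Extensions.DependencyInjection;

public static class AcmeComponentTestExtensions
{
    // Vendor-side: register everything the component library needs under test
    public static void AddAcmeComponents(this TestContext context)
    {
        context.Services.AddSingleton<IAcmeTheme, DefaultAcmeTheme>();
        context.Services.AddScoped<IAcmeDialogService, FakeDialogService>();
    }
}

Consumers then call the one-liner before rendering components:

using var ctx = new TestContext();
ctx.AddAcmeComponents();
var cut = ctx.RenderComponent<AcmeDataGrid>();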


Did you know that you can target attributes in C# 9 records using a prefix of property: or field:? You do now.
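Here’s a quick sketch using System.Text.Json attributes (the Person record is illustrative):

using System.Text.Json.Serialization;

// property: applies the attribute to the compiler-generated property;
// field: would apply it to the backing field instead
public record Person(
    [property: JsonPropertyName("first_name")] string FirstName,
    [property: JsonPropertyName("last_name")] string LastName);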


The .NET Foundation released their November/December 2020 update just before the holidays. There’s a community survey you can fill out; it’s available until the end of March. They’ve also launched a speaker directory.


Speaking of the .NET Foundation, tonight I watched board member Shawn Wildermuth’s new film, Hello World. It begins as a love letter to our industry, then takes a sobering look at our diversity and inclusion issues. It’s a wonderful film that everyone should see. It’s a cheap rental (check out the site to see where to rent it)—and once there you can also help support organizations that are helping push our industry to where it should be.


As you may or may not have noticed, the Azure SDKs have gotten a facelift. During our break, Jeffrey Richter wrote about their architecture.

Crucial to the SDK design—and most importantly, crucial to me—is that retry and cancellation mechanisms are built in. Additionally, you can implement your own policies in their pipeline (think ASP.NET Core middleware) for things like caching and mocking. Right on cue, Pavel Krymets writes about unit testing and mocking with the .NET SDKs.
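For a sense of how a custom policy looks, here’s a minimal sketch (the logging policy is illustrative, and BlobClientOptions stands in for any Azure SDK client options type):

using System;
using Azure.Core;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

// A synchronous policy that logs every outgoing request in the pipeline
public class RequestLoggingPolicy : HttpPipelineSynchronousPolicy
{
    public override void OnSendingRequest(HttpMessage message)
        => Console.WriteLine($"Sending {message.Request.Method} {message.Request.Uri}");
}

// Register the policy through the client options, much like middleware
var options = new BlobClientOptions();
options.AddPolicy(new RequestLoggingPolicy(), HttpPipelinePosition.PerCall);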


🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

🥅 The .NET platform

🌎 Web development

⛅ The cloud

📔 Languages

🔧 Tools

🔐 Security

👍 Design, architecture, and testing

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ More with Gruut: Use the Microsoft Bot Framework to analyze emotion with the Azure Face API ]]> https://www.daveabrock.com/2021/01/05/azure-bot-service-image-emotion-api/ 608c3e3df4327a003ba2fe80 Mon, 04 Jan 2021 18:00:00 -0600 Last summer—it was 2020, I think it was summer—I published a post that showed off how to use Azure text sentiment analysis with the Microsoft Bot Framework. I used it to build on Shahed Chowdhuri’s GruutChatbot project. The gist was that Gruut would respond with I am Gruut differently based on the conveyed emotion of the text that a user enters. (As before, the project doesn’t have a legal team so we’re using the Gruut naming to avoid any issues from the Smisney Smorporation.)

Here’s a thought: since the Bot Framework allows users to send attachments, what if we send him a picture? How would Gruut react if we sent images of a happy-seeming person, or an upset-seeming one? We can do this with the Azure Face API, which allows us to detect perceived emotions from faces in images. That’s what we’ll do in this post.

Note: While we’re using this service innocently, face detection is a serious topic. Make sure you understand what you are doing, and how the data is used. A good first step is to check out the Azure Cognitive Services privacy policy. For this post, we’ll be using stock images.

Before you start

In this post, we won’t walk through installing the Bot Framework SDK and Emulator, creating the bot project, and so on—that was covered in the first post on the topic. Check out that post if you need to get acclimated.

We’ll need to create a Face resource in Azure (yes, that’s what it’s called). Once you create that, grab the Endpoint (from the resource in the Azure Portal) and also a key from the Keys and Endpoint section. You can then throw those into Azure Key Vault with the other values (check out the previous post for those instructions).

Process the attachment

First, we need to process the attachment. This involves getting the location from the chat context, and downloading the image to a temporary path. Then, we’ll be ready to pass it to the Cognitive Services API. In our renamed GruutBot class, we now need to conditionally work with attachments.

Here’s how it looks. We’ll dig deeper after reviewing the code.

protected override async Task OnMessageActivityAsync(
            ITurnContext<IMessageActivity> turnContext,
            CancellationToken cancellationToken)
{
    string replyText;

    if (turnContext.Activity.Attachments is not null)
    {
      // get url from context, then download to pass along to the service
      var fileUrl = turnContext.Activity.Attachments[0].ContentUrl;
      var localFileName = Path.Combine(Path.GetTempPath(),
              turnContext.Activity.Attachments[0].Name);

      using var webClient = new WebClient();
      webClient.DownloadFile(fileUrl, localFileName);

      replyText = await _imageService.GetGruutResponse(localFileName);
    }
    else
    {
      replyText = await _textService.GetGruutResponse(turnContext.Activity.Text, cancellationToken);
    }

    await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
}

Let’s focus on what happens when we receive the attachment.

if (turnContext.Activity.Attachments is not null)
{
    // get url from context, then download to pass along to the service
    var fileUrl = turnContext.Activity.Attachments[0].ContentUrl;
    var localFileName = Path.Combine(Path.GetTempPath(),
    turnContext.Activity.Attachments[0].Name);

    using var webClient = new WebClient();
    webClient.DownloadFile(fileUrl, localFileName);

    replyText = await _imageService.GetGruutResponse(localFileName);
}

For now, let’s (blindly) assume we only have one attachment, and that the attachment is an image. From the turnContext we can retrieve information about the sent message. In our case, we want the ContentUrl of the attachment. We’ll then build a path in the local temp directory using the attachment’s Name. Then, using the WebClient API, we can download the file to that location. With a path set, we can now inspect our GetGruutResponse method in a new image service, which takes the path where the image resides.

Write a new ImageAnalysisService

Because we’re now using both text and image analysis SDKs in this app, we should split them out into different services. Let’s begin with a new ImageAnalysisService.

To kick off a new ImageAnalysisService.cs file, we’ll read in the settings using the ASP.NET Core Options pattern with ImageApiOptions. These align with what I’ve got stored in the Azure Key Vault.

namespace GruutChatbot.Services.Options
{
    public class ImageApiOptions
    {
        public string FaceEndpoint { get; set; }
        public string FaceCredential { get; set; }
    }
}

Then, after bringing in the Microsoft.Azure.CognitiveServices.Vision.Face NuGet library, we can use constructor dependency injection to “activate” our services. Here’s the beginning of the class:

namespace GruutChatbot.Services
{
    public class ImageAnalysisService
    {
        readonly ImageApiOptions _imageApiSettings;
        readonly ILogger<ImageAnalysisService> _logger;
        readonly FaceClient _imageClient;

        public ImageAnalysisService(
            IOptions<ImageApiOptions> options,
            ILogger<ImageAnalysisService> logger)
        {
            _imageApiSettings = options.Value ?? throw new
                ArgumentNullException(nameof(options),
                "Image API options are required.");
            _logger = logger;
            _imageClient = new FaceClient(
                new ApiKeyServiceClientCredentials(
                    _imageApiSettings.FaceCredential))
            {
                Endpoint = _imageApiSettings.FaceEndpoint
            };
        }
}

Now, we’ll write a GetGruutResponse method that takes the file path to our downloaded attachment. Here’s how the method looks (we’ll dig in after):

public async Task<string> GetGruutResponse(string filePath)
{
  try
  {
    var faceAttributes = new List<FaceAttributeType?>
       { FaceAttributeType.Emotion};

    using var imageStream = File.OpenRead(filePath);

    var result = await _imageClient.Face.DetectWithStreamAsync(
              imageStream, true, false, faceAttributes);

    return GetReplyText(GetHeaviestEmotion(result));
  }
  catch (Exception ex)
  {
    _logger.LogError(ex.Message, ex);
  }
  return string.Empty;
}

First, we need to pass in a List<FaceAttributeType?>, which tells the SDK which face attributes you want back. There are a ton of options, like FacialHair, Gender, Hair, Blur, and so on—we just need Emotion.

var faceAttributes = new List<FaceAttributeType?>
                    { FaceAttributeType.Emotion};

Then, we’ll open up a FileStream for the file in question.

using var imageStream = File.OpenRead(filePath);

To make the Cognitive Services call, we can call the DetectWithStreamAsync method. The true argument tells the API to return a faceId, and false tells it to skip returning face landmarks.

var result = await _imageClient.Face.DetectWithStreamAsync(
                    imageStream, true, false, faceAttributes);

When we call this, we’re going to get a list of all the different emotions and their values, from 0 to 1. For us, we’d like to pick the emotion that carries the most weight. To do that, I wrote a GetHeaviestEmotion method. This is a common use case, as the SDK has a ToRankedList extension method that suits my needs. (Note that this assumes the API detected at least one face; if it didn’t, the resulting exception is caught by the caller’s try/catch.)

private string GetHeaviestEmotion(IList<DetectedFace> imageList) =>
            imageList.FirstOrDefault().FaceAttributes.Emotion.
                ToRankedList().FirstOrDefault().Key;

Now, all that’s left is to figure out what to send to Gruut. I created a GruutMoods class that just has some static readonly strings for ease of use:

namespace GruutChatbot.Services
{
    public static class GruutMoods
    {
        public readonly static string PositiveGruut = "I am Gruut.";
        public readonly static string NeutralGruut = "I am Gruut?";
        public readonly static string NegativeGruut = "I AM GRUUUUUTTT!!";
        public readonly static string FailedGruut = "I.AM.GRUUUUUT";
    }
}

Now, we can use a switch expression to determine what to send back:

Note: I classified the perceived emotions based on trial-and-error using a few images. Your mileage may vary.

private static string GetReplyText(string emotion) => emotion switch
{
  "Happiness" or "Surprise" => GruutMoods.PositiveGruut,
  "Anger" or "Contempt" or "Disgust" or "Sadness" =>
           GruutMoods.NegativeGruut,
  "Neutral" or "Fear" => GruutMoods.NeutralGruut,
  _ => GruutMoods.FailedGruut
};

So now, in GetGruutResponse this is what we’ll return:

return GetReplyText(GetHeaviestEmotion(result));

Again, here’s all of GetGruutResponse for your reference:

public async Task<string> GetGruutResponse(string filePath)
{
    try
    {
      var faceAttributes = new List<FaceAttributeType?>
        { FaceAttributeType.Emotion};

      using var imageStream = File.OpenRead(filePath);

      var result = await _imageClient.Face.DetectWithStreamAsync(
                  imageStream, true, false, faceAttributes);

      return GetReplyText(GetHeaviestEmotion(result));
    }
    catch (Exception ex)
    {
      _logger.LogError(ex.Message, ex);
    }

    return string.Empty;
}

And finally, the entirety of ImageAnalysisService:

using GruutChatbot.Services.Options;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace GruutChatbot.Services
{
    public class ImageAnalysisService
    {
        readonly ImageApiOptions _imageApiSettings;
        readonly ILogger<ImageAnalysisService> _logger;
        readonly FaceClient _imageClient;

        public ImageAnalysisService(
            IOptions<ImageApiOptions> options,
            ILogger<ImageAnalysisService> logger)
        {
            _imageApiSettings = options.Value ?? throw new
                ArgumentNullException(nameof(options),
                "Image API options are required.");
            _logger = logger;
            _imageClient = new FaceClient(
                new ApiKeyServiceClientCredentials(
                    _imageApiSettings.FaceCredential))
            {
                Endpoint = _imageApiSettings.FaceEndpoint
            };
        }

        public async Task<string> GetGruutResponse(string filePath)
        {
            try
            {
                var faceAttributes = new List<FaceAttributeType?>
                    { FaceAttributeType.Emotion};

                using var imageStream = File.OpenRead(filePath);

                var result = await _imageClient.Face.DetectWithStreamAsync(
                    imageStream, true, false, faceAttributes);

                return GetReplyText(GetHeaviestEmotion(result));
            }
            catch (Exception ex)
            {
                _logger.LogError(ex.Message, ex);

            }

            return string.Empty;
        }

        private string GetHeaviestEmotion(IList<DetectedFace> imageList) =>
            imageList.FirstOrDefault().FaceAttributes.Emotion.
                ToRankedList().FirstOrDefault().Key;

        private static string GetReplyText(string emotion) => emotion switch
        {
            "Happiness" or "Surprise" => GruutMoods.PositiveGruut,
            "Anger" or "Contempt" or "Disgust" or "Sadness" =>
              GruutMoods.NegativeGruut,
            "Neutral" or "Fear" => GruutMoods.NeutralGruut,
            _ => GruutMoods.FailedGruut
        };
    }
}

Also, if you haven’t already, you’ll need to register these services with the DI container in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // Create the Bot Framework Adapter with error handling enabled.
    services.AddSingleton<IBotFrameworkHttpAdapter, AdapterWithErrorHandler>();

    // Create the bot as a transient. In this case the ASP Controller is expecting an IBot.
    services.AddTransient<IBot, GruutBot>();

    // some stuff removed for brevity

    services.Configure<ImageApiOptions>(Configuration.GetSection(nameof(ImageApiOptions)));
    services.Configure<TextApiOptions>(Configuration.GetSection(nameof(TextApiOptions)));

    services.AddSingleton<ImageAnalysisService>();
    services.AddSingleton<TextAnalysisService>();    
}

Repeat with TextAnalysisService

With this in place, a refactored TextAnalysisService takes the same shape and looks like this:

using Azure;
using Azure.AI.TextAnalytics;
using GruutChatbot.Services.Options;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using System;
using System.Threading;
using System.Threading.Tasks;

namespace GruutChatbot.Services
{
    public class TextAnalysisService
    {
        readonly TextApiOptions _textApiOptions;
        readonly ILogger<TextAnalysisService> _logger;
        readonly TextAnalyticsClient _textClient;

        public TextAnalysisService(
            IOptions<TextApiOptions> options,
            ILogger<TextAnalysisService> logger)
        {
            _textApiOptions = options.Value ?? throw new
                ArgumentNullException(nameof(options),
                "Text API options are required.");
            _logger = logger;
            _textClient = new TextAnalyticsClient(
                new Uri(_textApiOptions.CognitiveServicesEndpoint),
                new AzureKeyCredential(_textApiOptions.AzureKeyCredential));
        }

        public async Task<string> GetGruutResponse(string inputText, 
                CancellationToken cancellationToken)
        {
            try
            {
                var result = await _textClient.AnalyzeSentimentAsync(
                    inputText,
                    cancellationToken: cancellationToken);

                return GetReplyText(result.Value.Sentiment);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex.Message, ex);

            }

            return string.Empty;
        }

        static string GetReplyText(TextSentiment sentiment) => sentiment switch
        {
            TextSentiment.Positive => GruutMoods.PositiveGruut,
            TextSentiment.Negative => GruutMoods.NegativeGruut,
            TextSentiment.Neutral => GruutMoods.NeutralGruut,
            _ => GruutMoods.FailedGruut
        };
    }
}

The final product

Now, with our SDK calls refactored to services, here’s our full GruutBot:

using GruutChatbot.Services;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

namespace GruutChatbot.Bots
{
    public class GruutBot : ActivityHandler
    {
        private readonly TextAnalysisService _textService;
        private readonly ImageAnalysisService _imageService;

        public GruutBot(TextAnalysisService textService, ImageAnalysisService imageService)
        {
            _textService = textService;
            _imageService = imageService;
        }

        protected override async Task OnMessageActivityAsync(
            ITurnContext<IMessageActivity> turnContext,
            CancellationToken cancellationToken)
        {
            string replyText;

            if (turnContext.Activity.Attachments is not null)
            {
                // get url from context, then download to pass along to the service
                var fileUrl = turnContext.Activity.Attachments[0].ContentUrl;
                var localFileName = Path.Combine(Path.GetTempPath(),
                    turnContext.Activity.Attachments[0].Name);

                using var webClient = new WebClient();
                webClient.DownloadFile(fileUrl, localFileName);

                replyText = await _imageService.GetGruutResponse(localFileName);
            }
            else
            {
                replyText = await _textService.GetGruutResponse(turnContext.Activity.Text, cancellationToken);
            }

            await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);

        }

        protected override async Task OnMembersAddedAsync(
            IList<ChannelAccount> membersAdded,
            ITurnContext<IConversationUpdateActivity> turnContext,
            CancellationToken cancellationToken)
        {
            // welcome new users
        }
    }
}

The bot in action

Now, let’s see how the bot looks when we send Gruut text and images.

GruutBot in action

Wrap up

In this post, we showed how to use Azure Cognitive Services face emotion detection with the Microsoft Bot Framework. We processed and downloaded attachments, passed them to a new ImageAnalysisService, and called the Face API to detect perceived emotion. Then, we wrote logic to send a message back to Gruut. Finally, we showed a refactoring of the existing text analysis to a new TextAnalysisService, which allows us to manage calls to different APIs seamlessly.

]]>
<![CDATA[ Blast Off with Blazor: Prerender a Blazor Web Assembly application ]]> https://www.daveabrock.com/2020/12/27/blast-off-blazor-prerender-wasm/ 608c3e3df4327a003ba2fe7f Sat, 26 Dec 2020 18:00:00 -0600 So far in our series, we’ve walked through the intro, wrote our first component, dynamically updated the HTML head from a component, isolated our service dependencies, worked on hosting our images over Azure Blob Storage and Cosmos DB, and built a responsive image gallery.

I did some housekeeping with the code that I won’t mention here, such as renaming projects. Check out the GitHub repo for the latest.

Now, we need to discuss how to mitigate the biggest drawback of using Blazor Web Assembly: the initial load time. While the Web Assembly solution allows us to enjoy a serverless deployment scenario and fast interactions, the initial download can take a while. Users need to sit around while the browser downloads the .NET runtime and all your project’s DLLs.

While the ASP.NET Core team has made a lot of progress in improving the .NET IL linker and making framework libraries more linkable, the initial startup time of Blazor Web Assembly apps is still quite noticeable. In my case, an initial, uncached download takes 7 seconds. With prerendering, I was able to cut this down considerably. That is the focus of this post.

With ahead-of-time (AoT) compilation coming in .NET 6, you might be inclined to treat it as a fix for this problem. It isn’t: Blazor Web Assembly currently runs on an Intermediate Language (IL) interpreter rather than a JIT-based runtime, and while AoT can speed up runtime performance, it comes at the cost of an even larger download size. See Dan Roth’s comment for details.


This post builds off the wonderful content from Chris Sainty, Jon Hilton, and the ASP.NET Core documentation.

What is prerendering?

Without prerendering, your Blazor Web Assembly app parses the index.html file it receives, fetches all your dependencies, then loads and launches your application—leaving users sitting and waiting through a less-than-ideal UX. With prerendering, the initial load occurs on the server with the download occurring in the background. By the time the page quickly loads and the download completes, your users are ready to interact with your site—with a big Blazor Web Assembly drawback out of the way.

When we say prerendering, we really mean prerendering components. We do this by integrating our app with Razor Pages. The cost: you’re kissing a simple static file deployment goodbye in favor of faster startup times.

This is not simply converting your app to a Blazor Server solution. While Blazor Server executes your application from a server-side ASP.NET Core app, this approach only compiles your code and serves static assets from a server project. You’re still running your app in a browser as you were before.

To get started, we need to create and configure a Server project.

Configure the BlastOff.Server project

To get started, you’ll need to create a new BlastOff.Server project and add it to the solution. The easiest and simplest way is using the dotnet CLI.

To create a new project, I executed the following from the root of the project (where the solution file is):

dotnet new web -o BlastOff.Server

Then, I added it to my existing solution:

dotnet sln add BlastOff.Server/BlastOff.Server.csproj

In the new BlastOff.Server project, I headed over to Startup.cs and updated the Configure method to the following:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // some code removed for brevity

    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

On the IApplicationBuilder, we call UseBlazorFrameworkFiles(), which allows us to serve Blazor Web Assembly files from a specified path, and UseStaticFiles(), which enables us to serve static assets.

After enabling routing, we use the endpoint middleware to map Razor Pages, which is the secret sauce behind this magic. Then, with MapFallbackToPage(), we serve the _Host.cshtml file from the Server project’s Pages directory.

This _Host.cshtml file will replace our Client project’s index.html file. Therefore, we can copy the contents of our index.html here.

You’ll notice below that we:

  • Set the namespace to BlastOff.Server.Pages
  • Set a @using directive to the BlastOff.Client project
  • Make sure the <link href> references resolve correctly
  • Update the Component Tag Helper to include a render-mode of WebAssemblyPrerendered. This option prerenders the component into static HTML and includes a flag for the app to make the component interactive when the browser loads it
  • Ensure the Blazor script points to the client script at _framework/blazor.webassembly.js

Here’s the _Host.cshtml file in its entirety:

@page "/"
@namespace BlastOff.Server.Pages
@using BlastOff.Client

<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Blast Off With Blazor</title>
    <base href="~/" />
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/site.css" rel="stylesheet" />
</head>
<body>
    <app>
        <component type="typeof(App)" render-mode="WebAssemblyPrerendered" />
    </app>

    <script src="_framework/blazor.webassembly.js"></script>
</body>
</html>

With this in place, I headed over to the BlastOff.Client project to delete wwwroot/index.html and this line from Program.Main. It felt both nice and weird.

builder.RootComponents.Add<App>("#app");

Share the service layer

With prerendering, we need to think about how our calls to Azure Functions work. Previously, our API calls came from the BlastOff.Client project. With prerendering, the first calls are made from the server. As a result, it makes more sense to move our service layer out of the BlastOff.Client project and into a new BlastOff.Shared project.

Here’s our ImageService now (I also moved the Image model here for good measure):

using Microsoft.Extensions.Logging;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

namespace BlastOff.Shared
{
    public interface IImageService
    {
        public Task<IEnumerable<Image>> GetImages(int days);
    }

    public class ImageService : IImageService
    {
        readonly IHttpClientFactory _clientFactory;
        readonly ILogger<ImageService> _logger;

        public ImageService(ILogger<ImageService> logger, IHttpClientFactory clientFactory)
        {
            _clientFactory = clientFactory;
            _logger = logger;
        }

        public async Task<IEnumerable<Image>> GetImages(int days)
        {
            try
            {
                var client = _clientFactory.CreateClient("imageofday");
                var images = await client.GetFromJsonAsync
                    <IEnumerable<Image>>($"api/image?days={days}");
                return images.OrderByDescending(img => img.Date);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex.Message, ex);
            }

            return default;
        }
    }
}

We can thank ourselves for isolating our service dependency a few posts ago, turning a potential headache into some copying and pasting. The only cleanup in our BlastOff.Client project is to bring in the BlastOff.Shared project to reference our service in Program.cs:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;
using BlastOff.Shared;

namespace BlastOff.Client
{
    public class Program
    {
        public static async Task Main(string[] args)
        {
            var builder = WebAssemblyHostBuilder.CreateDefault(args);
            builder.Services.AddHttpClient("imageofday", iod =>
            {
                iod.BaseAddress = new Uri("http://localhost:7071"); // local Functions host; use builder.HostEnvironment.BaseAddress when deployed
            });
            builder.Services.AddScoped<IImageService, ImageService>();

            await builder.Build().RunAsync();
        }
    }
}

If we run the app, it loads much quicker. When we browse to /images it loads fast too. But if we hit reload, we’ll see a nasty error because we didn’t register the ImageService in our Server project. To fix it, add the HttpClient and ImageService registrations to Startup.ConfigureServices in the Server project:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    services.AddHttpClient("imageofday", iod =>
    {
        iod.BaseAddress = new Uri("http://localhost:7071");
    });

    services.AddScoped<IImageService, ImageService>();
}

Check our progress

Check out how much better everything looks now.

Our slow site

Wrap up

In this post, we worked through how to prerender our Blazor Web Assembly application. We created a new server project, shared our dependencies in a new project, and understood how prerendering can significantly improve initial load times.

References

]]>
<![CDATA[ The .NET Stacks #30: 🥂 See ya, 2020 ]]> https://www.daveabrock.com/2020/12/19/dotnet-stacks-30/ 608c3e3df4327a003ba2fe7e Fri, 18 Dec 2020 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on December 14, 2020. Subscribe at the bottom of this post to get the content right away!

👋 Happy Monday, everyone! This is the very last issue of 2020. Every year, I try to take the last few weeks off to decompress and unplug—considering how 2020 went, this year is no exception.

📰 News and announcements

GitHub Universe 2020 occurred this week, and Shanku Niyogi has all the details. (As .NET developers, the gap between .NET and GitHub will continue to narrow as Microsoft will pivot to GitHub Actions as its long-term CI/CD solution.)

The GitHub site itself stole the show. Dark Mode is now supported natively—you can delete your favorite browser extensions that do this for you. Also, the front page is a site to behold. Just hover over a globe to see commits in real-time from across the world:

The GitHub globe

As for actual development features, we now see the ability to auto-merge PRs, discussions, and CD support. Also, companies can now invest in OSS with the GitHub Sponsors program.

What does this Sponsors news mean for the .NET OSS ecosystem? That is a complicated question. .NET PM Immo Landwerth wrote about it:

Today, we’re usually reactive when it comes to library & framework investments. By the time we know there is a need for a library we routinely research existing options but usually end up rolling our own, because nothing fits the bill as-is and we either don’t have the time or we believe we wouldn’t be able to successfully influence the design of the existing library. This results in a perception where Microsoft “sucks the air” out of the OSS ecosystem because our solutions are usually more promoted and often tightly integrated into the platform, thus rendering existing solutions less attractive. Several maintainers have cited this as a reason that they gave up or avoid building libraries that seem foundational.
To avoid this, we need to start engaging with owners of existing libraries and work with them to increase their quality … it should become a general practice. We should even consider an ongoing work item to actively investigate widely used libraries and help them raise the quality or tighten their integration into the .NET developer experience.

Check out the full discussion here:


Claire Novotny wrote two posts this week about using Source Link with Visual Studio and Visual Studio Code. Source Link has been around a few years, but is now getting the full treatment. If you aren’t familiar, it’s “a language and source-control agnostic system for providing first-class source debugging experiences for binaries.” This helps solve issues when you can’t properly debug an external library (or a .NET library).

Considering most library code is openly available from a GitHub HTTP request, debugging external dependencies is a lot easier these days. If a library has enabled Source Link, you can work with the code as you would with yours: setting breakpoints, checking values, and so on. I can see benefit personally in something like System.Text.Json or Newtonsoft.Json—it’ll be easier to pinpoint any silly serialization issues I may or may not be causing.

Claire’s first post shows off the value of using Source Link, but for this to work a library’s PDB files need to be properly indexed. Her second post shows how to add Source Link to your projects.


Speaking of Visual Studio, Jacqueline Widdis rolls out Visual Studio 2019 v16.9 Preview 2. We’re seeing a few quality-of-life improvements—using directives are automatically inserted when pasting types, and VS automatically adds semicolons when creating objects and invoking methods.

Speaking of quality-of-life, the Visual Studio team is aware of the underwhelming reception to the Git integration updates added with v16.8. They have a survey out to understand why a lot of folks are turning off the feature. They’re listening, so fill out the survey to have your voice heard.


Xin Shi discusses Infer#, which brings interprocedural static analysis capabilities to .NET. It currently helps you detect null dereferences and resource leaks. The team is working on race condition detection and thread safety violations.


Igor Velikorossov announced what’s new in the Windows Forms runtime for .NET 5. They include a new TaskDialog control, ListView enhancements, upgrades to FileDialog, and performance and accessibility improvements.


Anthony Chu announces that .NET 5 support in Azure Functions is now in early preview. For this to happen, .NET 5 functions need to run in an out-of-process language worker separate from the Azure Functions runtime. As a result, a .NET 5 app runs differently than a 3.1 one; you build an executable that imports the .NET 5 language worker as a NuGet package. Check out the repository readme for the full details. The .NET 5 worker will be generally available in early 2021.

In other Azure news, Richard Park announced the new Azure Service Bus client libraries. Following the guidelines for new Azure SDKs, they’re the culmination of months of work, and the post includes details on how they make working with core Service Bus functionality quite a bit easier.

Also, the November 2020 release of the Azure SDKs include the Service Bus updates, as well as updates for Form Recognizer, Identity, and Text Analytics.


Microsoft released a couple new Learn modules: one on building projects with GitHub, and another with an introduction to PowerShell.


Kubernetes 1.20 has been released. The release notes are very infra-heavy, but one thing to note is the Dockershim deprecation, which we talked about last week.


Okta has released a new CLI. With one okta start command, it looks like it can register you for a new account (if you don’t have one) and work with a sample ASP.NET Core MVC application.

Dev tip: nested tuple deconstruction

I’m a fan of deconstructing C# objects using tuples. Did you know you can nest them? Even though it’s been around a while, I didn’t.

David Pine shows us how:
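If the embedded example doesn’t come through, the idea looks something like this quick sketch:

using System;

var person = (Name: "Dave", Location: (City: "Madison", State: "WI"));

// Deconstruct the outer tuple and the nested tuple in a single statement
var (name, (city, state)) = person;

Console.WriteLine($"{name} lives in {city}, {state}."); // Dave lives in Madison, WI.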

📆 A 2020 .NET Stacks recap

It’s been a busy 2020 at The .NET Stacks, even if we’ve only been around since the end of May. What’s happened since then? Let’s recap.

The first issue covered Microsoft Build. We talked about Project Tye and YARP. We discussed native .NET feature flags and local k8s dev in Visual Studio, EF Core, gRPC-Web and C# 9, Blazor Mobile Bindings, Azure SDKs and test automation, and how C# 9 has become more functional. Then, we discussed Project Coyote and a new Razor editor, .NET’s approachability, Newtonsoft’s new rule and using async void, and what the future of Azure DevOps looks like.

Then, we covered .NET 5 Blazor improvements, NuGet changes and many-to-many in EF Core 5, C# source generators, app trimming, Blazor CSS isolation, the fate of .NET Standard, and a Microsoft Ignite recap. We introduced route-to-code and talked about what’s happening with IdentityServer, talked about Azure Static Web Apps, got excited about .NET 5 and the .NET Foundation, and talked about how .NET 5 support works.

After that, we talked about Blazor’s production readiness, C# 9 records, and celebrated the official release of .NET 5. We gave some love to ASP.NET Core 5, talked about the future of APIs in ASP.NET Core MVC, and talked about how to switch controllers to route-to-code. And now, we’re here.

Along the way, we’ve had some wonderful interviews from community leaders. We discussed the PresenceLight project, writing about ASP.NET Core from A-Z, the Azure community, Entity Framework Core, the Microsoft docs, ML.NET, F# and functional programming (from Microsoft and the community), and Coravel. I’ve got interviews coming from LaBrina Loving, Layla Porter, Cecil Phillip, and Steve Sanderson—and welcome any suggestions for future interview subjects.

I started this newsletter as an experiment to keep my mind sharp during the pandemic, and I’m happy to see how much it’s grown. I hope you enjoy it, but I’m always open to feedback about how I can make it better for you. I’ll have a survey in early 2021, but if you have anything to share don’t hesitate to reach out.

See you in 2021! I hope you and your family have a safe and relaxing holiday season.

🌎 Last week in the .NET world

🔥 The Top 3

📅 Community and events

🌎 Web development

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos


Go home
]]>
<![CDATA[ Blast Off with Blazor: Build a responsive image gallery ]]> https://www.daveabrock.com/2020/12/16/blast-off-blazor-responsive-gallery/ 608c3e3df4327a003ba2fe7d Tue, 15 Dec 2020 18:00:00 -0600 So far in our series, we’ve walked through the intro, wrote our first component, dynamically updated the HTML head from a component, isolated our service dependencies, and worked on hosting our images over Azure Blob Storage and Cosmos DB.

Now, we’re going to query Cosmos DB, fetch our images, and display them in a responsive image gallery. We’ll learn how to reuse components and pass parameters to them.

After we work on this, we’ll enhance the gallery in future posts, with:

  • Enabling the “infinite scrolling” feature with Blazor virtualization
  • Filtering and querying images
  • Creating a dialog to see a larger image and other details


A quick primer

If you haven’t been with me for the whole series, we’re building a Blazor Web Assembly app hosted with Azure Static Web Apps at blastoffwithblazor.com. I’ve copied images from the NASA APOD API (all 25 years!) to Azure Blob Storage, and am storing the image metadata in a serverless Cosmos DB instance. Feel free to read those links to learn more.

With the images in place, we’re going to build the image gallery. It’s responsive and will look good on devices of any size.

Our slow site

All code is on GitHub.

Customize the service layer

In previous posts, to get up and running we fetched a random image. To make things more interesting, we’re going to fetch images from the last 90 days. (In future posts, we’ll work on infinite scrolling, searching, and filtering.) This requires updates to our Azure Function. We’ll ask for a days query string parameter that allows the caller to request up to 90 days of images. For example, if we call api/image?days=90, we get images from the last 90 days.

I’ve added logic to verify and grab the days, make sure it’s in the appropriate range, then query Cosmos for the data itself.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.CosmosRepository;
using Data;
using System.Collections.Generic;

namespace Api
{
    public class ImageGet
    {
        readonly IRepository<Image> _imageRepository;

        public ImageGet(IRepository<Image> imageRepository) => _imageRepository = imageRepository;

        [FunctionName("ImageGet")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image")] HttpRequest req,
            ILogger log)
        {
            bool hasDays = int.TryParse(req.Query["days"], out int days);
            log.LogInformation($"Requested images from last {days} days.");

            // Reject the request if days is missing or outside the 1-90 range
            if (!hasDays || days < 1 || days > 90)
                return new BadRequestResult();

            // Await the repository call rather than blocking on .Result
            var images = await _imageRepository.GetAsync(
                img => img.Date > DateTime.Now.AddDays(-days));

            return new OkObjectResult(images);
        }
    }
}

In the ApiClientService class in the Client project, update the call to take in the days. We’ll also order the images descending (newest to oldest):

public async Task<IEnumerable<Image>> GetImageOfDay(int days)
{
    try
    {
        var client = _clientFactory.CreateClient("imageofday");
        var images = await client.GetFromJsonAsync
                <IEnumerable<Image>>($"api/image?days={days}");
                return images.OrderByDescending(img => img.Date);
    }
    catch (Exception ex)
    {
        _logger.LogError(ex.Message, ex);
    }

     return null;
}

Now, in the code for the Images component, at Images.razor.cs, change the call to pass in the days:

protected override async Task OnInitializedAsync()
{
    _images = await ApiClientService.GetImageOfDay(days: 90);
}

Update Images component to list our image collection

So, how should we lay out our images? I’d like to list them left-to-right, top-to-bottom. Luckily, I can use CSS grid layouts. We define how we want our rows and columns laid out, and grid handles how they render at different window dimensions.

Using Tailwind CSS, I’m going to add a little bit of padding. Then, I’ll have one column on small devices, two columns on medium devices, and three columns on large and extra-large devices.

<div class="p-2 grid grid-cols-1 sm:grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-3">
</div>

In between the <div>s, we’ll iterate through our images and display them. We could handle the rendering here, but that’s asking for maintenance headaches and won’t give us any reusability. The advantage of Blazor is its component model, so let’s build a reusable component.

We can pass parameters to our components, and here we’ll want to pass down our Image model. Here’s how we’ll use the new component from the Images page, then:

<div class="p-2 grid grid-cols-1 sm:grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-3">
    @foreach (var image in _images)
    {
        <ImageCard ImageDetails="image" />
    }
 </div>

Create a reusable ImageCard component

Now, in Pages, create two files: ImageCard.razor and ImageCard.razor.cs. In the .cs file, use the [Parameter] attribute to pass down the Image. (We’ll likely add much more to this component, so we’re writing a partial class.)

using Microsoft.AspNetCore.Components;

namespace Client.Pages
{
    partial class ImageCard : ComponentBase
    {
        [Parameter]
        public Data.Image ImageDetails { get; set; }
    }
}

How will our new component look? As you can imagine, a lot of the work is in designing the layout. There are a lot of differently sized images, and I spent a lot of time getting the layout to work. Even though we aren’t going deep on Tailwind CSS in these posts, it’s worth mentioning here. (Also, much thanks to Khalid Abuhakmeh for lending a hand.)

In the outer <div>, we’re giving the card a large shadow and some margin:

<div class="m-6 rounded overflow-hidden shadow-lg">
</div>

Then we’ll render our image and set the alternate text to the title; both come from our model. We’re saying here that it’ll fill the entire width of the card, only reach a specified height, and use object-cover, which uses the cover value from the object-fit CSS property. It maintains an image’s aspect ratio as the image is resized.

<div class="m-6 rounded overflow-hidden shadow-lg">
    <img class="w-full h-48 object-cover" src="@ImageDetails.Url" alt="@ImageDetails.Title" />
</div>

In the rest of the markup, we do the following:

  • We have an IsNew property on our model that we use to apply a New badge if an image is newer than three days old.
  • If so, we give it a teal background and darker teal text, and apply some effects to it. We use flexbox to align it appropriately. If it isn’t new, we make sure things are aligned appropriately without the badge.
  • Finally, we display the Title. We use the truncate property, which gives long titles ... at the end. This way, the cards don’t align differently depending on how many lines of text the title consumes.

<div class="m-6 rounded overflow-hidden shadow-lg">
        <img class="w-full h-48 object-cover" src="@ImageDetails.Url" alt="@ImageDetails.Title" />
        <div class="p-6">
            <div class="flex items-baseline">
                @if (ImageDetails.IsNew)
                {
                    <div class="flex items-baseline">
                        <span class="inline-block bg-teal-200 text-teal-800 text-xs px-2 rounded-full
                      uppercase font-semibold tracking-wide">New</span>
                        <div class="ml-2 text-gray-600 text-md-left uppercase font-semibold tracking-wide">
                            @ImageDetails.Date.ToLongDateString()
                        </div>
                    </div>
                }
                else
                {
                    <div class="text-gray-600 text-md-left uppercase font-semibold tracking-wide">
                        @ImageDetails.Date.ToLongDateString()
                    </div>
                }

            </div>
            <h3 class="mt-1 font-semibold text-2xl leading-tight truncate">@ImageDetails.Title</h3>
        </div>
</div>

Here’s how a card looks with the New badge:

The card with a new badge

And again, here’s how the page looks. Check it out live at blastoffwithblazor.com as well!

Our slow site

For comparison, here’s how it looks on an iPhone X.

Our slow site

Wrap up

In this post, we showed off how to query images from our Cosmos DB and display our images using a responsive layout. Along the way, we learned how to pass parameters to reusable components.

In future posts, we’ll work on infinite scrolling, clicking an image for more details, and querying and searching the data.

]]>
<![CDATA[ Blast Off with Blazor: Integrate Cosmos DB with Blazor WebAssembly ]]> https://www.daveabrock.com/2020/12/13/blast-off-blazor-cosmos/ 608c3e3df4327a003ba2fe7c Sat, 12 Dec 2020 18:00:00 -0600 So far in our series, we’ve walked through the intro, wrote our first component, dynamically updated the HTML head from a component, and isolated our service dependencies.

It’s time to address the elephant in the room—why is the image loading so slow?

Our slow site

There are a few reasons for that. First, we have to wait for the app to load when we refresh the page, and with Blazor WebAssembly that means waiting for the .NET runtime to load. On top of that, we’re calling off to a REST API, getting the image source, and sending that to our view. That’s not incredibly efficient.

In this post, we’re going to correct both issues. We’ll first move the Image component to its own page, then use a persistence layer to store and work with our images. This includes hosting our images on Azure Storage and accessing their details using the Azure Cosmos DB serverless offering. This will also pay off later, when we create components to search and filter our data.


Move our Image component to its own page

To move our Image component away from the default Index view, rename your Index.razor and Index.razor.cs files to Image.razor and Image.razor.cs.

In the Image.razor file, change the route from @page "/" to @page "/image". That keeps it as a routable component, meaning it’ll render whenever we browse to /image.

Then, in Image.razor.cs, make sure to rename the partial class to Image.

Here’s how Image.razor.cs looks now:

using Client.Services;
using Microsoft.AspNetCore.Components;
using System;
using System.Threading.Tasks;

namespace Client.Pages
{
    partial class Image : ComponentBase
    {
        Data.Image _image;

        [Inject]
        public IApiClientService ApiClientService { get; set; }

        private static string FormatDate(DateTime date) => date.ToLongDateString();

        protected override async Task OnInitializedAsync()
        {
            _image = await ApiClientService.GetImageOfDay();
        }
    }
}

Create a new Home component

With that in place, let’s create a new Home component. Right now, it’ll welcome users to the site and point them to our images component. (If you need a refresher on how the NavigationManager works, check out my previous post on the topic.)

@page "/"
@inject NavigationManager Navigator

<div class="flex justify-center">
    <div class="max-w-md rounded overflow-hidden shadow-lg m-12">
        <h1 class="text-4xl m-6">Welcome to Blast Off with Blazor</h1>
        <img class="w-full" src="images/armstrong.jpg" />

        <p class="m-4">
            This is a project to sample various Blazor features and functionality.

            We'll have more soon, but right now we are fetching random images.
        </p>
        <button class="text-center m-4 bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded"
                @onclick="ToImagePage">
            🚀 Image of the Day
        </button>
    </div>
</div>

@code {
    void ToImagePage() => Navigator.NavigateTo("/image");
} 

Here’s how the Home component looks now.

Our new index component

Integrate Cosmos DB with our application

With our first fix out of the way, it’s now time to speed up image loading. I’m going to do this in two ways:

  • Store the images statically in Azure Storage
  • Store image metadata, including the Azure Storage URLs in Cosmos DB

In the past, Cosmos has been incredibly expensive and wouldn’t have been worth the cost for this project. With a new serverless offering (now in preview), it’s a lot more manageable and can easily be run under my monthly Azure credits. While Cosmos excels with intensive, globally-distributed workloads, I’m after a fully-managed NoSQL offering that’ll allow me flexibility if my schema needs change.

In this post, I won’t show you how to create a Cosmos instance, upload our images to Azure Storage, then create a link between the two. This is all documented in a recent post. After I have the data set up, I need to understand how to access it from my application. That’s what we’ll cover.

Now, I could use the Azure Cosmos DB C# client to work with Cosmos. There are a lot of complexities there, and I don’t need any of that business; I just need basic CRUD operations. I’m a fan of David Pine’s Azure Cosmos DB Repository .NET SDK, and will be using it here. It allows me to maintain the abstraction layer between the API and the client application, and is super easy to work with.

Update the API

After adding the NuGet package to my Api and Data projects, I can start to configure it. There are a few different ways to wire up your Cosmos details (check out the readme for the options); I’ll use the Startup class.

Here’s the Configure method for my Azure Function in the Api project:

public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddCosmosRepository(
        options =>
        {
            options.CosmosConnectionString = "my-connection-string";
            options.ContainerId = "image";
            options.DatabaseId = "APODImages";
        });
}

Next, I’ll need to make some changes to the model. Take a look and I’ll describe after:

using Microsoft.Azure.CosmosRepository;
using Newtonsoft.Json;
using System;

namespace Data
{
    public class Image : Item
    {
        [JsonProperty("title")]
        public string Title { get; set; }

        [JsonProperty("copyright")]
        public string Copyright { get; set; }

        [JsonProperty("date")]
        public DateTime Date { get; set; }

        [JsonProperty("explanation")]
        public string Explanation { get; set; }

        [JsonProperty("url")]
        public string Url { get; set; }
    }
}

You’ll see that the model now inherits from Item, which the SDK requires. It contains an Id, a Type (used to filter implicitly on your behalf), and a partition key property. I’m also using JsonProperty attributes to match up with the Cosmos fields. Down the line, I might split models between my app and my API, but this should work for now.

Now, in ImageGet.cs, I can call off to my Cosmos instance quite easily. Here, I query by a random date.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.CosmosRepository;
using Data;

namespace Api
{
    public class ImageGet
    {
        readonly IRepository<Image> _imageRepository;

        public ImageGet(IRepository<Image> imageRepository) => _imageRepository = imageRepository;

        [FunctionName("ImageGet")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image")] HttpRequest req,
            ILogger log)
        {
            // Await the repository call rather than blocking on .Result
            var imageResponse = await _imageRepository.GetAsync(
                img => img.Date == GetRandomDate());
            return new OkObjectResult(imageResponse);
        }

        private static DateTime GetRandomDate()
        {
            var random = new Random();
            var startDate = new DateTime(1995, 06, 16);
            var range = (DateTime.Today - startDate).Days;
            return startDate.AddDays(random.Next(range));
        }
    }
}

Update the client app

In our ApiClientService, I’ll need to slightly modify my GetImageOfDay method. The call returns an IEnumerable<Image>, so I’ll just grab the first result.

public async Task<Image> GetImageOfDay()
{
    try
    {
        var client = _clientFactory.CreateClient("imageofday");
        var image = await client.GetFromJsonAsync<IEnumerable<Image>>("api/image");
        return image.First();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex.Message, ex);
    }

    return null;
}

The images are now loading much faster!

Update our tests

Thanks to successfully isolating our service last time, the tests for the Image component don’t need much fixing. Simply changing the component to Image instead of Index does the trick:

[Fact]
public void ImageOfDayComponentRendersCorrectly()
{
    var mockClient = new Mock<IApiClientService>();
    mockClient.Setup(i => i.GetImageOfDay()).ReturnsAsync(GetImage());

    using var ctx = new TestContext();
    ctx.Services.AddSingleton(mockClient.Object);

    IRenderedComponent<Client.Pages.Image> cut = ctx.RenderComponent<Client.Pages.Image>();
    var h1Element = cut.Find("h1").TextContent;
    var imgElement = cut.Find("img");
    var pElement = cut.Find("p");

    h1Element.MarkupMatches("My Sample Image");
    imgElement.MarkupMatches(@"<img src=""https://nasa.gov"" 
        class=""rounded-lg h-500 w-500 flex items-center justify-center"">");
    pElement.MarkupMatches(@"<p class=""text-2xl"">Wednesday, January 1, 2020</p>");
}

We can also add a quick test to our Home component in a new HomeTest file, which is similar to how we did our NotFound component:

[Fact]
public void IndexComponentRendersCorrectly()
{
    using var ctx = new TestContext();
    var cut = ctx.RenderComponent<Home>();
    var h1Element = cut.Find("h1").TextContent;
    var buttonElement = cut.Find("button").TextContent;

    h1Element.MarkupMatches("Welcome to Blast Off with Blazor");
    buttonElement.MarkupMatches("🚀 Image of the Day");
}

Wrap up

In this post, we worked on speeding up the loading of our images. We first moved our Image component off the home page, then integrated Cosmos DB into our application. Finally, we cleaned up our tests.

]]>
<![CDATA[ The .NET Stacks #29: More on route-to-code and some Kubernetes news ]]> https://www.daveabrock.com/2020/12/12/dotnet-stacks-29/ 608c3e3df4327a003ba2fe7b Fri, 11 Dec 2020 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on December 7, 2020. Subscribe at the bottom of this post to get the content right away!

Happy Monday! Here’s what we’re talking about this week:

  • Digging deeper on “route-to-code”
  • Kubernetes is deprecating Docker … what?
  • Last week in the .NET world

🔭 Digging deeper on “route-to-code”

Last week, I talked about the future of writing APIs for ASP.NET Core MVC. The gist: there’s a new initiative (Project Houdini) coming to move MVC productivity features to the core of the stack, and part of that is generating imperative APIs for you at compile time using source generation.

This leverages a way to write slim APIs in ASP.NET Core without the bloat of the MVC framework: it’s called “route-to-code.” We talked about it in early October. I thought it’d be fun to migrate a simple MVC CRUD API to this model, and I wrote about it this week.

As I wrote, this isn’t meant to be an MVC replacement, but a solution for simple JSON APIs. It does not support model binding or validation, content negotiation, or dependency injection from constructors. Most times, though, you’re wanting to separate business logic from your execution context—it’s definitely worth a look.

Here’s me using an in-memory Entity Framework Core database to get some bands:

endpoints.MapGet("/bands", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();
    var bands = await repository.Bands.ToListAsync();
    await context.Response.WriteAsJsonAsync(bands);
});

There’s no framework here, so instead of using DI to access my EF context, I get a service through the HttpContext. Then, I can use helper methods that let me read from and write to my pipe. Pretty slick.

Here’s me getting a record by ID:

endpoints.MapGet("/bands/{id}", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();
    var id = context.Request.RouteValues["id"];
    var band = await repository.Bands.FindAsync(Convert.ToInt32(id));

    if (band is null)
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
        return;
    }
    await context.Response.WriteAsJsonAsync(band);
});

The simplicity comes with a cost: it’s all very manual. I even have to convert the ID to an integer myself (not a big deal, admittedly).

How does a POST request work? I can check to see if the request is asking for JSON. With no framework or filters, my error checking is setting a status code and returning early. (I can abstract this out, obviously. It took a while to get used to not having a framework to lean on.)

endpoints.MapPost("/bands", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();

    if (!context.Request.HasJsonContentType())
    {
        context.Response.StatusCode = StatusCodes.Status415UnsupportedMediaType;
        return;
    }

    var band = await context.Request.ReadFromJsonAsync<Band>();
    repository.Bands.Add(band);
    await repository.SaveChangesAsync();
    await context.Response.WriteAsJsonAsync(band);
});

In the doc, Microsoft will be the first to tell you it’s for the simplest scenarios. It’ll be interesting to see what improvements come: will mimicking DI become easier? I hope so.

🤯 Kubernetes is deprecating Docker … what?

I know this is a .NET development newsletter, but these days you probably need at least a passing knowledge of containerization. To that end: this week, you may have heard something along the lines of “Kubernetes is deprecating Docker.” It sounds concerning, but Kubernetes says you probably shouldn’t worry and Docker is saying the same.

Still, it’s true: Kubernetes is deprecating Docker as a container runtime after v1.20—currently planned for late 2021.

From a high level, I think Google’s Kelsey Hightower summed it up best:

Think of it like this – Docker refactored its code base, broke up the monolith, and created a new service, containerd, which both Kubernetes and Docker now use for running containers.

Docker isn’t a magical “make me a container” button—it’s an entire tech stack. Inside of that is a container runtime, containerd. It contains a lot of bells and whistles for us when doing development work, but k8s doesn’t need it because it isn’t a person. (If it were, I’d like to have a chat.)

For k8s to get through this abstraction layer, it needs to use the dockershim tool to get to containerd—yet another maintenance headache. The kubelet is dropping the dockershim at the end of 2021, which removes Docker support as a container runtime. When this change comes, you just need to change your container runtime from Docker to another supported runtime.

Because this addresses a different environment than most folks use with Docker, it shouldn’t matter—the install you’re using in dev is typically different than the runtime in your k8s cluster. This change would largely impact k8s administrators and not developers.

Hopefully this clears up some potential confusion. We don’t talk about Docker and Kubernetes often, but this was too important not to discuss. (I could hardly contain myself.)

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

🚀 .NET 5

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

👍 Design, architecture and best practices

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use local function attributes with C# 9 ]]> https://www.daveabrock.com/2020/12/09/local-function-attributes-c-sharp-9/ 608c3e3df4327a003ba2fe7a Tue, 08 Dec 2020 18:00:00 -0600 If you look at what’s new in C# 9, you’ll see records, init-only properties, and top-level statements get all the glory. And that’s fine, because they’re great. At the end of the bulleted list, I noticed support for local attributes on local functions.

When it comes to local functions, with C# 9 you can apply attributes to function declarations, parameters, and type parameters.
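
For example, here’s a quick hypothetical sketch of the parameter case (Echo is an invented name; DisallowNull lives in System.Diagnostics.CodeAnalysis):

static void Main(string[] args)
{
    // An attribute on a local function's parameter, new in C# 9
    static TValue Echo<TValue>([DisallowNull] TValue value) => value;

    Console.WriteLine(Echo("hello"));
}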

The most common use case I’ve seen is using the ConditionalAttribute. This attribute checks conditional compilation symbols—instead of wrapping code in #if DEBUG regions, for example, you can do it here. You can declare multiple attributes, just like for a “normal” method. For example:

static void Main(string[] args)
{
    [Conditional("DEBUG")]
    [Conditional("BETA")]
    static void DoSuperSecretStuff()
    {
        Console.WriteLine("This only is executed in debug or beta mode.");
    }

    DoSuperSecretStuff();
}

If you want to apply the ConditionalAttribute to a local function, the function must be both static and void.

Of course, you can work with any kind of attributes. In the GruutBot project (which I showcased here), there’s a local function that has a switch expression to determine Gruut’s emotion:

static string GetReplyText(TextSentiment sentiment) => sentiment switch
{
    TextSentiment.Positive => "I am Gruut.",
    TextSentiment.Negative => "I AM GRUUUUUTTT!!",
    TextSentiment.Neutral => "I am Gruut?",
    _ => "I. AM. GRUUUUUT"
};

Here I can go wild with a few more attributes:

[Obsolete]
[MethodImpl(MethodImplOptions.AggressiveOptimization)]
static string GetReplyText(TextSentiment sentiment) => sentiment switch
{
    TextSentiment.Positive => "I am Gruut.",
    TextSentiment.Negative => "I AM GRUUUUUTTT!!",
    TextSentiment.Neutral => "I am Gruut?",
    _ => "I. AM. GRUUUUUT"
};

The ObsoleteAttribute signifies to callers that the function should no longer be used (and will trigger a warning when invoked).

As for the MethodImplAttribute, it allows me to specify AggressiveOptimization. The details are a topic for another post, but from a high level: this method is JIT’ed when first called, and the JIT’ed code is always optimized—making it ineligible for tiered compilation.

This new local function support brings attributes to this scope, which is a welcome addition. These aren’t “new” attributes; what’s new is the ability to apply them to local functions.

Wrap up

In this post, I quickly showed you how to use local function attributes in C# 9. If you have any suggestions, please let me know!

]]>
<![CDATA[ Automate a Markdown links page with Pinboard and C# ]]> https://www.daveabrock.com/2020/12/07/make-link-list-with-markdown-and-with-c-sharp/ 608c3e3df4327a003ba2fe79 Sun, 06 Dec 2020 18:00:00 -0600 This is my contribution for the C# Advent Calendar, a collection of awesome C# posts throughout December. Check it out!

I run a weekly newsletter called The .NET Stacks. If you’ve read this blog for any period of time you know this, thanks to my shameless plugs. (Feel free to subscribe!) I love it and enjoy writing it every week, but it can take up a lot of my time.

The biggest time spent is generating all the great community links. Luckily, I found a way to automate this process. I can click a button, and a console app writes all my links to my Markdown file. Then, all I have to do is fill in the rest of the newsletter.

It’s down to a two-step process, really: throughout the week, I add links to my Pinboard, then my app writes them to a Markdown file.

This post will show off how you can populate a Markdown links page using Pinboard and C#.

The act of getting links has to be somewhat manual. I could say “whenever Person X posts, save it” but what if this person goes nuts and writes a post criticizing Santa Claus? My links are curated—while I don’t agree with everything I share, I do want them to be relevant and useful. After getting links from my various RSS feeds and link aggregators, I can start the automation.

Where do I store my bookmarks? I’m a big fan of Pinboard. For not even $2 a month, it’s a no frills, fast, and secure way to save bookmarks. No ads, no tracking, and a great feature set. And it comes with an API! I knew this would save me hours a month and it makes the investment well worth it.

After exploring the API docs, I found a hacky—but useful!—way to save me loads of time.

A link in Pinboard

These are all retrievable fields from the API. The title, which is my name, is anything before the link and the description is my link text.

After I get all my links in, usually by Sunday, I’m ready to generate the links. Let’s look at how I make that happen.

This is all done with a simple console app. It isn’t enterprise-ready; it’s for me. So I didn’t go crazy with any of it—the point of this is something quick to get my time back.

To interact with the Pinboard API, I’m using the Pinboard.net NuGet package, a wonderful C# wrapper that makes connecting to the API so easy. Thanks to Shrayas Rajagopal for your work on this!

I was able to use C# 9 top-level programs to avoid the Main method ceremony.

At the top of the program, I do the following:

using var pb = new PinboardAPI("my-api-key");
var bookmarksList = await pb.Posts.All();
await WriteMarkdownFile(bookmarksList);

I connect to Pinboard using my API key and get all my bookmarks (referred to as Posts). Then, everything happens in my WriteMarkdownFile method.

I define the filePath on my system, and include today’s date in the name.

var filePath = $"C:\\path\\to\\site\\_drafts\\{DateTime.Now:yyyy-MM-dd}-dotnet-stacks.markdown";

All my links are categorized by tags. To avoid hardcoding, I have a Tags class to store the tag names (what I use in Pinboard) and the heading (what is in the newsletter):

public static class Tags
{
    public const string AnnouncementsHeading = "📢 Announcements";
    public const string AnnouncementsTag = "Announcements";

    public const string BlazorHeading = "😎 Blazor";
    public const string BlazorTag = "Blazor";

    // and so on and so forth
}

Back in the WriteMarkdownFile method, I store these in a dictionary. Depending on the week, I change the order of these, so I want that flexibility. (I could add sorting logic, I suppose.)

var tagInfo = new Dictionary<string, string>
{
    { Tags.AnnouncementsHeading, Tags.AnnouncementsTag },
    { Tags.CommunityHeading, Tags.CommunityTag },
    { Tags.BlazorHeading, Tags.BlazorTag },
    { Tags.DotNetCoreHeading, Tags.DotNetCoreTag },
    { Tags.CloudHeading, Tags.CloudTag },
    { Tags.LanguagesHeading, Tags.LanguagesTag },
    { Tags.ToolsHeading, Tags.ToolsTag },
    { Tags.XamarinHeading, Tags.XamarinTag },
    { Tags.PodcastsHeading, Tags.PodcastsTag },
    { Tags.VideoHeading, Tags.VideoTag }
};

Using a classic StringBuilder, I start to write out the file. I begin with my Jekyll front matter and the beginning heading, which is always the same:

var sb = new StringBuilder("---");
sb.AppendLine();
sb.AppendLine($"date: \"{DateTime.Now:yyyy-MM-dd}\"");
sb.AppendLine("title: \"The .NET Stacks: <fill in later>\"");
sb.AppendLine("tags: [dotnet-stacks]");
sb.AppendLine("comments: false");
sb.AppendLine("---");
sb.AppendLine();

sb.AppendLine("## 🌎 Last week in the .NET world");
sb.AppendLine();
sb.AppendLine("### 🔥 The Top 3");
sb.AppendLine();

Here’s the fun part, where I print out all the bookmarks:

foreach (var entry in tagInfo)
{
    sb.AppendLine($"### {entry.Key}");
    sb.AppendLine();

    foreach (string bookmark in GetBookmarksForTag(entry.Value, bookmarksList))
    {
        sb.AppendLine($"- {bookmark}");
    }

    sb.AppendLine();
}

For each tag in the dictionary, I create a heading with the tag text, then do a line break. Then, for each bookmark object, I have a method that retrieves a filtered list based on the tag. For each bookmark, I construct a string with how I want to format the link, then add it to a list.

static string[] GetBookmarksForTag(string tag, AllPosts allBookmarks)
{
    var filteredBookmarks = allBookmarks.Where(b => b.Tags.Contains(tag));
    var filteredList = new List<string>();

    foreach (var bookmark in filteredBookmarks)
    {
        var stringToAdd = $"{bookmark.Description} [{bookmark.Extended}]({bookmark.Href}).";
        filteredList.Add(stringToAdd);
    }

    return filteredList.ToArray();
}

Once I’m done with all the tags, I use a TextWriter to write to the file itself.

await using TextWriter stream = new StreamWriter(filePath);
await stream.WriteAsync(sb.ToString());

In just 76 lines of code, I was able to come up with something that saves me a ridiculous amount of time. There are other improvements to be made, like storing backups to Azure Storage, but I like it.

Here’s the full code, if you’d like:

using pinboard.net;
using pinboard.net.Models;
using PinboardToMarkdown;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

using var pb = new PinboardAPI("my_api_key");
var bookmarksList = await pb.Posts.All();
await WriteMarkdownFile(bookmarksList);

static async Task WriteMarkdownFile(AllPosts bookmarksList)
{
    var filePath = $"C:\\path\\to\\site\\_drafts\\{DateTime.Now:yyyy-MM-dd}-dotnet-stacks.markdown";

    var tagInfo = new Dictionary<string, string>
    {
          { Tags.AnnouncementsHeading, Tags.AnnouncementsTag },
          { Tags.CommunityHeading, Tags.CommunityTag },
          { Tags.BlazorHeading, Tags.BlazorTag },
          { Tags.DotNetCoreHeading, Tags.DotNetCoreTag },
          { Tags.CloudHeading, Tags.CloudTag },
          { Tags.LanguagesHeading, Tags.LanguagesTag },
          { Tags.ToolsHeading, Tags.ToolsTag },
          { Tags.XamarinHeading, Tags.XamarinTag },
          { Tags.PodcastsHeading, Tags.PodcastsTag },
          { Tags.VideoHeading, Tags.VideoTag }
    };

    var sb = new StringBuilder("---");
    sb.AppendLine();
    sb.AppendLine($"date: \"{DateTime.Now:yyyy-MM-dd}\"");
    sb.AppendLine("title: \"The .NET Stacks: <fill in later>\"");
    sb.AppendLine("tags: [dotnet-stacks]");
    sb.AppendLine("comments: false");
    sb.AppendLine("---");
    sb.AppendLine();

    sb.AppendLine("## 🌎 Last week in the .NET world");
    sb.AppendLine();
    sb.AppendLine("### 🔥 The Top 3");
    sb.AppendLine();

    foreach (var entry in tagInfo)
    {
        sb.AppendLine($"### {entry.Key}");
        sb.AppendLine();

        foreach (string bookmark in GetBookmarksForTag(entry.Value, bookmarksList))
        {
            sb.AppendLine($"- {bookmark}");
        }

        sb.AppendLine();
    }

    await using TextWriter stream = new StreamWriter(filePath);
    await stream.WriteAsync(sb.ToString());
}

static string[] GetBookmarksForTag(string tag, AllPosts allBookmarks)
{
    var filteredBookmarks = allBookmarks.Where(b => b.Tags.Contains(tag));
    var filteredList = new List<string>();

    foreach (var bookmark in filteredBookmarks)
    {
        var stringToAdd = $"{bookmark.Description} [{bookmark.Extended}]({bookmark.Href}).";
        filteredList.Add(stringToAdd);
    }

    return filteredList.ToArray();
}

Wrap up

In this post, I showed how you can use Pinboard and C# to send links to a Markdown page automatically. I showed why I like to use Pinboard, and we also stepped through the code to see how it all works.

If you have any suggestions, please let me know!

]]>
<![CDATA[ The .NET Stacks, #28: The future of MVC and themes of .NET 6 ]]> https://www.daveabrock.com/2020/12/05/dotnet-stacks-28/ 608c3e3df4327a003ba2fe78 Fri, 04 Dec 2020 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on November 30, 2020. Subscribe at the bottom of this post to get the content right away!

Happy Monday to all. This week, we’ll be discussing:

  • The future of ASP.NET Core MVC APIs
  • Check out the “themes” of .NET 6
  • .NET Q&A: Not the Overflow you’re looking for
  • Last week in the .NET world

The future of ASP.NET Core MVC APIs

This week, David Fowler stopped by the ASP.NET community standup to talk about ASP.NET Core architecture. As you’d expect, it was quite informative. We learned how the team is addressing MVC, ASP.NET Core’s exception to the rule.

As ASP.NET Core has kept an eye on lightweight and modular services, MVC isn’t exactly a shining example of this concept. As with a lot of Microsoft tech, there are a lot of historical reasons for this—when MVC was brought to ASP.NET Core, a lot of time was spent merging the Web API and MVC bits into one platform, so performance analysis never saw the light of day.

If you develop APIs with MVC, then, you’re paying a lot up front. When many use cases amount to CRUD operations over HTTP, do you need all the filters, formatters, and so on? Why pay for all of MVC if you aren’t using it? As said in the standup, we’re now looking at a “framework for framework authors,” where most of the abstractions go unused by 99% of developers.

Fowler and team are working on something called “Project Houdini”—an effort to “make MVCs disappear.” Don’t worry, MVC will always be around, but the effort revolves around pushing MVC productivity features to the core of the stack, not treating MVC as a special citizen. Along the way, performance should improve. Fowler showed off a way to generate imperative APIs for you at compile time using source generation (allowing you to keep the traditional MVC controller syntax).

As a result, you’re shipping super-efficient code much closer to the response pipe. And when AOT gets here, it’ll benefit from runtime performance and treeshaking capabilities. Stay tuned: there’s a lot to wrap your head around here, with more information to come.

Check out the “themes” of .NET 6

It’s a fun time in the .NET 6 release cycle—there’s a lot of design and high-level discussions going on, as it’ll be 11 months before it actually ships. You can definitely look at things like the .NET Product Roadmap, but .NET PM Immo Landwerth has built a Themes of .NET site, which shows off the GitHub themes, epics, and stories slotted for .NET 6. (As if you had to ask, yes, it’s built with Blazor.)

As Immo warns, this is all fluid and not a committed roadmap—it will change frequently as the team’s thinking evolves. Even at this stage, I thought it was interesting to filter by the Priority 0 issues. These aren’t guaranteed to make it in, but they show what the team is currently prioritizing most highly.

📲 With Xamarin coming to .NET 6 via MAUI, there’s predictably a lot of Priority 0 work outlined here—from managing mobile .NET SDKs to improving performance and using .NET 6 targets.

🏫 I’m happy to see an epic around appealing to new developers and students, and there are epics around teaching entire classes in .NET Notebooks in VS Code and making setup easier. They’re also prioritizing democratizing machine learning with stories for using data when training in the cloud and better data loading options.

🌎 Blazor developers are eager for ahead-of-time (AOT) compilation, with stories around compiling .NET apps into WASM and AOT targeting.

✅ Acknowledging carryover from .NET 5 promises, there are stories for building libraries for HTTP/3 and confidently generating single file apps for supported target platforms. There’s also more on app trimming planned, especially when using System.Text.Json.

📈 As promised, there’s a lot planned to improve inner-loop performance—plans for the BCL to support hot reloading (and for Blazor to support it), improved MSBuild performance, and a better experience for Xamarin devs.

There’s so much to look through if you’ve got the time, and things are going to move around a lot—there are a ton of XL-sized Priority 0 issues, for example—but it’s always nice to geek out.

Microsoft Q&A for .NET: Not the Overflow you’re looking for

This week, Microsoft announced Microsoft Q&A for .NET. The Q&A experience, a replacement for Microsoft’s older forum platforms like MSDN and TechNet, was first rolled out last October (and went GA in May 2020). The inevitable question: “Why not just use and/or buy Stack Overflow?” I find it interesting that this week’s post didn’t mention what every developer was thinking, but I’ve explored Microsoft’s thoughts from last year:

We love Stack Overflow. We will continue supporting our customers who ask questions there. In the future, we will introduce a feature that points askers on Microsoft Q&A to relevant answers from Stack Overflow … However, Stack Overflow has specific criteria about what questions are appropriate for the community and Microsoft Q&A will have a more open policy regarding this. More importantly, via Microsoft Q&A we can create unique experiences that allow us to provide the highest level of support for our customers.

In Microsoft’s defense, the line “I see too many questions on SO that I believe are viable in any normal support scenario, but get closed and downvoted because people didn’t follow the rules” is all too familiar. There’s a lot of value in a Microsoft-focused platform centered around answers, not reputation.

In addition, Microsoft is looking to own the experience with additional support channels, badges, and integration with other Microsoft services. I don’t think Stack Overflow is getting nervous—the Q&A UX is similar to the MSDN forums, and that’s not a compliment—but the platform is there if you want to try it. I’ll be curious to see how it evolves.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

A light week because of the US Thanksgiving Holiday, so just one community standup—for ASP.NET, David Fowler joins to talk about ASP.NET Core architecture.

😎 ASP.NET Core / Blazor

🚀 .NET 5

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

]]>
<![CDATA[ Use ASP.NET Core route-to-code for simple JSON APIs ]]> https://www.daveabrock.com/2020/12/04/migrate-mvc-to-route-to-code/ 608c3e3df4327a003ba2fe77 Thu, 03 Dec 2020 18:00:00 -0600 When you need to write an API in ASP.NET Core, you’ve traditionally been forced to use ASP.NET Core MVC. While it’s very mature and full-featured, it also runs against the core principles of ASP.NET Core—it’s not lightweight or as efficient as other ASP.NET Core offerings. As a result you’re saddled with using all of a framework, even if you aren’t using a lot of its features. In most cases, you’re doing your logic somewhere else and the execution context (the CRUD actions over HTTP) deserves better.

This doesn’t even include the pitfalls of using controllers and action methods. The MVC pattern invites you to abuse controllers with dependencies and bloat, as much as we preach to other developers about the importance of thin controllers. Microsoft is very aware. This week, in the ASP.NET Standup, architect David Fowler mentioned Project Houdini—an effort to help make APIs over MVC more lightweight and performant (more on this at the end of this post).

We now have an alternative called “route-to-code.” You can now write ASP.NET Core APIs by connecting routing with your middleware. As a result, your code reads the request, writes the response, and you aren’t reliant on a heavy framework and its advanced configuration. It’s a wonderful pipe-to-pipe experience for simple JSON APIs.

To be clear—and Microsoft will be the first to tell you this—it is for basic JSON APIs that don’t need things like model binding, content negotiation, and advanced validation. For those scenarios, ASP.NET Core MVC will always be available to you.

Luckily for me, I just wrote a sample API in MVC to look at the HttpRepl tool. It’s a great project to convert to route-to-code so I can share my experiences.


How does route-to-code work?

In route-to-code, you specify the APIs in your project’s Startup.cs file. In this file (or even in another one), you define the routing and API logic in the application’s request pipeline, inside UseEndpoints.

To use this, ASP.NET Core gives you three helper methods to use:

  • ReadFromJsonAsync - reads JSON from the request and deserializes it to a given type
  • WriteAsJsonAsync - writes a value as JSON to the response body (and also sets the response type to application/json).
  • HasJsonContentType - a boolean method that checks the Content-Type header for JSON

Because route-to-code is for basic APIs only, it does not currently support:

  • Model binding or validation
  • OpenAPI (Swagger UI)
  • Constructor dependency injection
  • Content negotiation

There are ways to work around this, as we’ll see, but if you find yourself writing a lot of repetitive code, it’s a good sign that you might be better off leveraging ASP.NET Core MVC.

A quick tour of the route-to-code MVC project

I’ve got the route-to-code project out on GitHub. I won’t go through all the setup code but will just link to the key parts if you care to explore further.

In the sample app, I’ve wired up an in-memory Entity Framework database. I’m using a SampleContext class that uses Entity Framework Core’s DbContext. I’ve got a SeedData class that populates the database with a model using C# 9 records.

In the ConfigureServices method in Startup.cs, I add the in-memory database to the container, then flip on the switch in Program.cs.

Migrate our APIs from MVC to route-to-code

In this post, I’ll look at the GET, GET (by id), POST, and DELETE methods in my MVC controller and see what it takes to migrate them to route-to-code.

Before we do that, a quick note. In MVC, I’m using constructor injection to access my data from my SampleContext. Because context is a common term in route-to-code, I’ll be naming it repository there.

Anyway, here’s how the constructor injection looks in the ASP.NET Core MVC project:

using HttpReplApi.Data;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using HttpReplApi.ApiModels;

namespace HttpReplApi.Controllers
{
    [Produces("application/json")]
    [Route("[controller]")]
    public class BandsController : ControllerBase
    {
        private readonly SampleContext _context;

        public BandsController(SampleContext context)
        {
            _context = context;
        }

        // action methods

    }
}

In the following sections, I’ll go through all the endpoints and see what it takes to move to route-to-code.

The “get all items” endpoint

For a “get all” endpoint, the MVC action method is pretty self-explanatory. Since I’ve injected the context and the data is ready for me to query, I just need to get it.

[HttpGet]
public ActionResult<IEnumerable<Band>> Get() =>
    _context.Bands.ToList();

As mentioned earlier, all the endpoint routing and logic in route-to-code will be inside of app.UseEndpoints:

app.UseEndpoints(endpoints =>
{
    // endpoints here
});

Now, we can write a MapGet call to define and configure our endpoint. Take a look at this code and we’ll discuss after.

endpoints.MapGet("/bands", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();
    var bands = await repository.Bands.ToListAsync();
    await context.Response.WriteAsJsonAsync(bands);
});

I’m requesting an HTTP GET endpoint with the /bands route template. In this case, the context is HttpContext, where I can request services, get query strings, and read and write JSON.

I can’t use constructor-based dependency injection (DI) with route-to-code. Because there’s no framework to inject services into, we need to resolve these services manually. So, from HttpContext.RequestServices, I can call GetService to resolve my SampleContext. Any service with a transient or scoped lifetime needs to be resolved from RequestServices on each request. Singleton services, like loggers, can be resolved once from the root service provider and reused across requests. In our case, though, we’ll need to repeat the repository service discovery for every endpoint, which is a drag.

Finally, I’ll send back the Bands collection as JSON using the Response.WriteAsJsonAsync method.
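
To make that lifetime rule concrete, here’s a sketch (not from the sample project; the logger name is invented) of resolving a singleton once, outside the endpoint lambdas, while still resolving the scoped context per request:

var logger = app.ApplicationServices
    .GetService<ILoggerFactory>()
    .CreateLogger("Bands");

app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/bands", async context =>
    {
        // SampleContext is scoped, so resolve it from the request's services
        var repository = context.RequestServices.GetService<SampleContext>();
        logger.LogInformation("Fetching all bands");
        await context.Response.WriteAsJsonAsync(await repository.Bands.ToListAsync());
    });
});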

The “get by id” endpoint

What about the most common use case—getting an item by id? Here’s how we do it in an MVC controller:

[HttpGet("{id}")]
public async Task<ActionResult<Band>> GetById(int id)
{
    var band = await _context.Bands.FindAsync(id);

    if (band is null)
    {
        return NotFound();
    }

    return band;
}

In route-to-code, here’s how it looks:

endpoints.MapGet("/bands/{id}", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();
    var id = context.Request.RouteValues["id"];
    var band = await repository.Bands.FindAsync(Convert.ToInt32(id));

    if (band is null)
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
        return;
    }
    await context.Response.WriteAsJsonAsync(band);
});

The route matches a passed in id from the route. I need to fetch it by accessing it in Request.RouteValues. One note that isn’t in the documentation: because there’s no model binding, I need to manually convert the id string to an integer. After I figured that out, I was able to call FindAsync from my context, validate it, then write it in the WriteAsJsonAsync method.

The post endpoint

Here’s where things can get interesting—how can we post without model binding or validation?

Here’s how the existing controller works in ASP.NET Core MVC.

[HttpPost]
public async Task<ActionResult<Band>> Create(Band band)
{
    _context.Bands.Add(band);
    await _context.SaveChangesAsync();

    return CreatedAtAction(nameof(GetById), new { id = band.Id }, band);
}

In route-to-code, here’s what I wrote:

endpoints.MapPost("/bands", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();

    if (!context.Request.HasJsonContentType())
    {
        context.Response.StatusCode = StatusCodes.Status415UnsupportedMediaType;
        return;
    }

    var band = await context.Request.ReadFromJsonAsync<Band>();
    repository.Bands.Add(band);
    await repository.SaveChangesAsync();
    await context.Response.WriteAsJsonAsync(band);
});

As we saw earlier, we have a HasJsonContentType method at our disposal to see if I’m getting JSON. If not, I can set the appropriate status code and return.

Since I have JSON (no validation or binding, just verification that it is JSON), I can read it in, add it to the context, and save it to the database. Once I’m done, I can either WriteAsJsonAsync to give the caller back the new record, or do something like this:

context.Response.StatusCode = StatusCodes.Status201Created;
return;

On its own, WriteAsJsonAsync will return a 200, and only a 200. If you want a different status code, you need to set it by hand before writing the body, because we don’t have a framework to do it for us.
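
If you want the closest equivalent to MVC’s CreatedAtAction, a sketch might look like this (setting the status code and a Location header manually before writing the body):

// Sketch: hand-rolling a 201 Created with a Location header, since there's
// no framework helper to do it for us
context.Response.StatusCode = StatusCodes.Status201Created;
context.Response.Headers["Location"] = $"/bands/{band.Id}";
await context.Response.WriteAsJsonAsync(band);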

The delete endpoint

For our DELETE API, we’ll see it’s quite similar to POST (we’re just doing the opposite action).

Here’s what we do in the traditional MVC controller:

[HttpDelete("{id}")]
public async Task<IActionResult> Delete(int id)
{
    var band = await _context.Bands.FindAsync(id);

    if (band is null)
    {
        return NotFound();
    }

    _context.Bands.Remove(band);
    await _context.SaveChangesAsync();

    return NoContent();
}

Here’s what I wrote in the route-to-code endpoint.

endpoints.MapDelete("/bands/{id}", async context =>
{
    var repository = context.RequestServices.GetService<SampleContext>();

    var id = context.Request.RouteValues["id"];
    var band = await repository.Bands.FindAsync(Convert.ToInt32(id));

    if (band is null)
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
        return;
    }

    repository.Bands.Remove(band);
    await repository.SaveChangesAsync();
    context.Response.StatusCode = StatusCodes.Status204NoContent;
    return;
});

As you can see, there are a lot of considerations to make, even for ridiculously simple APIs like this one. It’s tough to work around dependency injection if you rely on it in your application.

That’s the price you pay for using a no-framework solution like this one. In return you get simplicity and performance. It’s up to you to decide if you need all the bells and whistles or if you only need route-to-code.

The path forward

The use case for this is admittedly small, so here’s a thought: what if I could enjoy these benefits in MVC? That’s the impetus behind “Project Houdini”, which David Fowler discussed in this week’s ASP.NET Standup.

This effort focuses on pushing MVC productivity features to the core of the stack with an eye on performance. Fowler showed off a way to generate these route-to-code APIs for you at compile time using source generation—allowing you to keep writing the traditional MVC code. Of course, when AOT comes with .NET 6, it’ll benefit from performance and treeshaking capabilities.

Wrap up

In this post, we discussed the route-to-code API style, and how it can help you write simpler APIs. We looked at how to migrate controller code to route-to-code, and also looked at the path forward.

]]>
<![CDATA[ The .NET Stacks #27: Giving some 💜 to under-the-radar ASP.NET Core 5 features ]]> https://www.daveabrock.com/2020/11/28/dotnet-stacks-27/ 608c3e3df4327a003ba2fe76 Fri, 27 Nov 2020 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter, The .NET Stacks. It was originally sent to subscribers on November 23, 2020. Subscribe at the bottom of this post to get the content right away!

Happy Monday to you all. With .NET 5 out the door and the holidays approaching, things aren’t as crazy. To that end, I wanted to check in on some ASP.NET Core 5 features that might have flown under the radar—and as always, a busy week with our wonderful .NET community.

Giving some 💜 to under-the-radar ASP.NET Core 5 features

As you may have heard, .NET 5 is officially here. Last week, we geeked out with .NET Conf. For .NET web developers, Microsoft is pushing Blazor on you hard. And for good reason: it’s the future of ASP.NET and a game-changer in many ways. Bringing C# to the browser is a big deal.

As a result, Blazor is getting a great deal of the ASP.NET Core 5 attention. It’s important to note that there are still tons of ASP.NET Core developers that aren’t moving to Blazor for a variety of reasons. To be clear, I’m not saying that Microsoft is neglecting these developers—they are not. Also, since Blazor resides in the ASP.NET Core ecosystem, many Blazor-related enhancements help ASP.NET Core as a whole!

However, with all the attention paid to Blazor, a lot of other ASP.NET Core 5 enhancements may have flown under your radar. So I’d like to spend some time this week focusing on non-Blazor ASP.NET Core 5 improvements.

(Also, while it’s very much in flight, the .NET team released the initial ASP.NET Core 6 roadmap for your review.)

⛅ Azure app service supports latest .NET version automatically

You may have noticed that Azure App Service supported .NET 5 on Day 1—meaning when you eagerly downloaded the SDK, you were able to deploy your web apps to Azure App Service. This is thanks to an Early Runtime Feature, which now allows for automatic support of subsequent .NET releases as soon as they’re released. You no longer have to patiently wait for latest-version support.

Before diving in, though, get familiar with how it all works.

🆕 Model binding works with C# 9 record types

With records, immutability is making a big imprint in the C# ecosystem. This can allow you to greatly simplify your need for “data classes”—a big use case is with models that contain properties and no behavior.

In ASP.NET Core 5, record types can now be used for model binding in MVC controllers and Razor Pages. I wrote about it last week. Look at how much simpler this can make things.

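As a sketch (the Person record and controller are invented for illustration), a one-line positional record can stand in for a property-only model class and still participate in binding:

public record Person(int Id, string Name);

public class PeopleController : ControllerBase
{
    // ASP.NET Core 5 binds the record from the request body, just like a class
    [HttpPost]
    public IActionResult Create([FromBody] Person person) => Ok(person);
}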

✅ API improvements

I did discuss this a few weeks ago—and also wrote about it last week—but when you create a Web API project in ASP.NET Core 5 (either from Visual Studio or the dotnet CLI), OpenAPI is enabled by default. This includes adding the Swashbuckle package into your middleware automatically, which allows you to use Swagger UI out-of-the-box. In Visual Studio, just hit F5 to explore your APIs.

Also, the FromBody attribute now supports options for optional properties, and new JSON extension methods can help you write lightweight APIs.
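
For example, here’s a sketch of the optional-body option (the Band type is borrowed from my route-to-code sample; EmptyBodyBehavior lives in Microsoft.AspNetCore.Mvc.ModelBinding):

[HttpPost]
public IActionResult Create(
    [FromBody(EmptyBodyBehavior = EmptyBodyBehavior.Allow)] Band band)
    => band is null ? NoContent() : Ok(band);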

🚀 HTTP/2 and gRPC performance improvements

In .NET 5, gRPC has been given the first-class treatment. If you aren’t familiar, gRPC is a contract-first, open-source remote procedure call (RPC) framework that enables real-time streaming and end-to-end code generation—and leaves a small network footprint with its binary serialization.

With .NET 5, gRPC gets the highest requests per second after Rust. This is thanks to reduced HTTP/2 allocations in Kestrel, improvements with reading HTTP headers, and support for Protobuf message serialization.

We are anxiously awaiting Azure App Service and IIS support. Keep an eye on this GitHub issue for updates.

🔐 AuthN and AuthZ improvements

There have been quite a few improvements with authentication and authorization.

First, Microsoft has developed a new Microsoft.Identity.Web library that allows you to handle authentication with Azure Active Directory. You can use this to access Azure resources in an easier way, including with Microsoft Graph. This is supported in ASP.NET Core 5.
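
Getting started can be a one-liner, as in this sketch (assuming your appsettings.json has the usual "AzureAd" section):

// In Startup.ConfigureServices: a minimal sketch, not a full setup
services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAd");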

Joseph Guadagno has written some nice posts on the topic—and as a long-time subscriber of The .NET Stacks, you can trust his judgment. 😉

You can also allow anonymous access to an endpoint, provide custom handling of authorization failures, and access authorization when using endpoint routing.

📦 Container performance enhancements

Before .NET 5, when you built an image from a Dockerfile, you needed to pull the whole .NET Core SDK image along with the ASP.NET Core image. Now, the download size for the SDK is greatly reduced and the runtime image download is almost eliminated (only pulling the manifest). You can check out the GitHub issue to see improvement numbers for various environments.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C# posts

📗 F# posts

🔧 Tools

📱 Xamarin

🏴‍☠️ Other finds

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use Azure Functions, Azure Storage blobs, and Cosmos DB to copy images from public URLs ]]> https://www.daveabrock.com/2020/11/25/images-azure-blobs-cosmos/ 608c3e3df4327a003ba2fe75 Tue, 24 Nov 2020 18:00:00 -0600 There are several reasons why you’ll want to host publicly accessible images yourself. For example, you may want to compress them or apply metadata for querying purposes on your site. For my Blast Off with Blazor project, I had that exact need.

In my case, I want to host images cheaply and also work with the image’s metadata over a NoSQL database. Azure Storage blobs and Azure Cosmos DB were obvious choices, since I’m already deploying the site as an Azure Static Web App.

To accomplish this, I wrote a quick Azure Function that accomplishes both tasks. Let’s take a look at how this works.

(Take a look at the repository for the full treatment.)

Before you begin: create Azure resources

Before you begin, you need to set up the following: a NASA API key, an Azure Cosmos DB account (with a database and container), an Azure Storage account (with a blob container), and the Azure Functions project itself.

Once I created all these resources, I added the configuration values to my local.settings.json file. These values are available in the portal when you browse to your various resources. (When you deploy your resources, they’ll need to be added to your configuration.)

{
  "Values": {
    "ApiKey": "my-nasa-api-key",
    "CosmosEndpoint": "url-to-my-cosmos-endpoint",
    "CosmosKey": "my-cosmos-primary-key",
    "CosmosDatabase": "name-of-my-cosmos-db",
    "CosmosContainer": "name-of-my-cosmos-container",
    "StorageAccount": "storage-account-name",
    "StorageKey": "storage-key",
    "BlobContainerUrl": "url-to-my-container"
  }
}

The Azure function

In my Azure function I’m doing three things:

  • Call the NASA Image of the Day API to get a response with image details—including URL, title, description, and so on
  • From the URL in the response payload, copy the image to Azure Storage
  • Then, update Cosmos DB with the URL of the new resource, and the other properties in the object

If we look at the Astronomy Picture of the Day site, it hosts an image and its metadata for the current day. I want to put the image in Storage Blobs and the details in Cosmos DB.

The Astronomy Picture of the Day site

Here’s how the function itself looks:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.Net.Http;
using Azure.Storage;
using Azure.Storage.Blobs;
using System.Linq;
using Microsoft.Azure.Cosmos;

public class ImageUploader
{
    private readonly HttpClient httpClient;
    private CosmosClient cosmosClient;

    public ImageUploader()
    {
        httpClient = new HttpClient();
        cosmosClient = new CosmosClient(Environment.GetEnvironmentVariable("CosmosEndpoint"),
                                        Environment.GetEnvironmentVariable("CosmosKey"));
    }

    [FunctionName("Uploader")]
    public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "upload")] HttpRequest req, ILogger log)
    {
        var apiKey = Environment.GetEnvironmentVariable("ApiKey");
        var response = await httpClient.GetAsync($"https://api.nasa.gov/planetary/apod?api_key={apiKey}");
        var result = await response.Content.ReadAsStringAsync();

        var imageDetails = JsonConvert.DeserializeObject<Image>(result);

        await UploadImageToAzureStorage(imageDetails.Url);
        await AddImageToContainer(imageDetails);
        return new OkObjectResult("Processing complete.");
    }
}

After I get the response back from the API, I deserialize it to an Image model and then do the work in Azure (keep reading for those details).

Here is the model:

using Newtonsoft.Json;
using System;

namespace MassImageUploader
{
    public class Image
    {
        [JsonProperty("id")]
        public Guid Id { get; set; }

        [JsonProperty("title")]
        public string Title { get; set; }

        [JsonProperty("copyright")]
        public string Copyright { get; set; }

        [JsonProperty("date")]
        public DateTime Date { get; set; }

        [JsonProperty("explanation")]
        public string Explanation { get; set; }

        [JsonProperty("url")]
        public string Url { get; set; }
    }
}

Upload image to Azure Storage blob container

In my UploadImageToAzureStorage method, I pass along the URI of the publicly accessible image. From that, I get the fileName by extracting the last part of the URI (for example, myfile.jpg). With the fileName I build the path of my new resource in the blobUri—that path will include the Azure Storage container URL and the fileName.

After that, I pass my credentials to a new StorageSharedKeyCredential and instantiate a new BlobClient. Then, the copy happens in StartCopyFromUriAsync, which takes the source image’s URI.

private async Task<bool> UploadImageToAzureStorage(string imageUri)
{
    var fileName = GetFileNameFromUrl(imageUri);
    var blobUri = new Uri($"{Environment.GetEnvironmentVariable("BlobContainerUrl")}/{fileName}");
    var storageCredentials = new StorageSharedKeyCredential(
        Environment.GetEnvironmentVariable("StorageAccount"),
        Environment.GetEnvironmentVariable("StorageKey"));
    var blobClient = new BlobClient(blobUri, storageCredentials);
    await blobClient.StartCopyFromUriAsync(new Uri(imageUri));
    return true;
}

private string GetFileNameFromUrl(string urlString)
{
    var url = new Uri(urlString);
    return url.Segments.Last();
}

If I browse to my Azure Storage container, I’ll see my new image.

The result

Push metadata to Cosmos DB

With my image successfully stored, I can push the metadata to an Azure Cosmos DB container. In my AddImageToContainer method, I pass in my populated Image model. Then I get the Azure Cosmos DB container, build the new blob URL from the file name, and call CreateItemAsync, passing in the image. Easy.

private async Task<bool> AddImageToContainer(Image image)
{
    var container = cosmosClient.GetContainer(
        Environment.GetEnvironmentVariable("CosmosDatabase"),
        Environment.GetEnvironmentVariable("CosmosContainer"));

    var fileName = GetFileNameFromUrl(image.Url);

    image.Id = Guid.NewGuid();
    image.Url = $"{Environment.GetEnvironmentVariable("BlobContainerUrl")}/{fileName}";

    await container.CreateItemAsync(image);
    return true;
}

In the Data Explorer for my Cosmos DB resource in the Azure portal, I can see my new record.

The result

Wrap up

In this post, we learned how to store a publicly accessible image in Azure Storage, then post that URL and other metadata in Azure Cosmos DB.

As always, feel free to post any feedback or experiences in the comments.

]]>
<![CDATA[ Blast Off with Blazor: Isolate and test your service dependencies ]]> https://www.daveabrock.com/2020/11/22/blast-off-blazor-service-dependencies/ 608c3e3df4327a003ba2fe74 Sat, 21 Nov 2020 18:00:00 -0600 So far in our series, we’ve walked through the intro, written our first component, and dynamically updated the HTML head from a component.

I’ve made testing a crucial part of this project and not an afterthought—as discussed previously, we’re using the bUnit project to unit test our components. As I discussed last time, though, testing our Index component was a little cumbersome because of the HttpClient dependency. There are ways to mock and test it, but we should ask … why are we injecting it directly?

It was great to inject it easily to get up and running but what happens as we build more components and more APIs to call, each with different endpoints and request headers? How will we manage that? And if we want to unit test our components, will I have to mock an HttpClient every time? What a nightmare.

Instead, we’ll create an API wrapper and inject that in our components. Any service-level implementation details can be abstracted away from the component. Along the way, we’ll learn about working with separate C# classes, using named clients with IHttpClientFactory, and how to quickly mock and test a service in bUnit. Let’s get started.


Does my code always have to reside in @code blocks?

To recap, here’s how our main component currently looks.

@page "/"
@inject HttpClient http

@if (image != null)
{
    <div class="p-4">
        <h1 class="text-6xl">@image.Title</h1>
        <p class="text-2xl">@FormatDate(image.Date)</p>
        @if (image.Copyright != null)
        {
            <p>Copyright: @image.Copyright</p>
        }
    </div>

     <div class="flex justify-center p-4">
        <img src="@image.Url" class="rounded-lg h-500 w-500 flex items-center justify-center"><br />
    </div>
}

@code {
    private Data.Image image;

    private string FormatDate(DateTime date) => date.ToLongDateString();

    protected override async Task OnInitializedAsync()
    {
        image = await http.GetFromJsonAsync<Data.Image>("api/image");
    }
}

While we don’t have a lot of lines of code in the @code block, there’s still a lot going on in this component. We’re directly injecting HttpClient to directly call our Azure Function. In the @code section I’ve written a helper method as well as OnInitializedAsync behavior. As we add more features and functionality, that @code block is only going to grow. We can definitely keep the C# coupled with our Razor syntax, as it makes it easy to see all that’s going on in one file—but we also can move all of this to a separate C# file for reuse and maintainability purposes.

This is a “code-behind” approach, as the code will sit behind the view logic in a partial class. To do this, we’ll create an Index.razor.cs file. If you’re using Visual Studio, you’ll see it’s nested “inside” the Blazor component.

Cut and paste everything inside the @code block to the new file. You’ll see some build errors and will need to resolve some dependencies. To resolve these:

  • Make the new file a partial class
  • Add a using statement for Microsoft.AspNetCore.Components
  • With the using added, inherit ComponentBase

What about injecting HttpClient, though? We can’t carry over that Razor syntax to our C# file. Instead, we’ll add it as a property with an Inject annotation above it.

Here’s how the class looks:

using Client.Services;
using Data;
using Microsoft.AspNetCore.Components;
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

namespace Client.Pages
{
    partial class Index : ComponentBase
    {
        Image _image;

        [Inject]
        public HttpClient http { get; set; }

        private static string FormatDate(DateTime date) => date.ToLongDateString();

        protected override async Task OnInitializedAsync()
        {
            _image = await http.GetFromJsonAsync<Image>("api/image");
        }
    }
}

Now, when we remove the @code block and HttpClient injection, our component looks cleaner:


@page "/"

@if (_image is null)
{
    <p>Loading...</p>
}
else
{
    <div class="p-4">
        <h1 class="text-6xl">@_image.Title</h1>
        <p class="text-2xl">@FormatDate(_image.Date)</p>
        @if (_image.Copyright != null)
        {
            <p>Copyright: @_image.Copyright</p>
        }
    </div>

    <div class="flex justify-center p-4">
        <img src="@_image.Url" class="rounded-lg h-500 w-500 flex items-center justify-center"><br />
    </div>
}

If we run the project, it’ll work as it always has. Now, let’s build out an API wrapper.

Add an API service wrapper to our project

We’re now ready to build our service. In our Client project, create an ApiClientService.cs file inside a Services folder. We’ll stub it out for now with an interface to boot:

public interface IApiClientService
{
    public Task<Image> GetImageOfDay();
}

public class ApiClientService : IApiClientService
{
    public async Task<Image> GetImageOfDay()
    {
        throw new NotImplementedException();
    }
}

We’ll also want to add the new namespace to the bottom of our _Imports.razor file:

@using Services

How to call HttpClient from our app

We could still call HttpClient directly, but over the course of this project we’ll be connecting to various APIs with different endpoints, different headers, and so on. Looking ahead, we should use IHttpClientFactory. This allows us to work with named instances, lets us delegate middleware handlers, and manages the lifetime of handler instances for us.
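
As an aside, here’s a hypothetical sketch of that handler capability: a DelegatingHandler that stamps a header on every request the named client sends (the handler and header names are invented).

public class ApiKeyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Add a header to every outgoing request before it leaves the client
        request.Headers.Add("X-Api-Key", "my-api-key");
        return base.SendAsync(request, cancellationToken);
    }
}

// Registration sketch:
// builder.Services.AddTransient<ApiKeyHandler>();
// builder.Services.AddHttpClient("imageofday")
//     .AddHttpMessageHandler<ApiKeyHandler>();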

To add a factory to our project, we’ll add a named client to our Program.cs file. While we’re here, we’ll inject our new IApiClientService as well.

public static async Task Main(string[] args)
{
    // important stuff removed for brevity
    builder.Services.AddHttpClient("imageofday", iod =>
    {
        iod.BaseAddress = new Uri(builder.Configuration["API_Prefix"] ?? builder.HostEnvironment.BaseAddress);
    });
    builder.Services.AddScoped<IApiClientService, ApiClientService>();
}

In AddHttpClient, I’m specifying a URI, and referencing my API as imageofday. With that in place, I can scoot over to ApiClientService and make it work.

First, let’s inject our ILogger and IHttpClientFactory in the constructor.

public class ApiClientService : IApiClientService
{
    readonly IHttpClientFactory _clientFactory;
    readonly ILogger<ApiClientService> _logger;

    public ApiClientService(ILogger<ApiClientService> logger, IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
        _logger = logger;
    }
}

In our GetImageOfDay logic, we’ll create our named client and use it to call our Azure Function at the api/image endpoint. Of course, we’ll catch any exceptions and log them appropriately.

public async Task<Image> GetImageOfDay()
{
    try
    {
        var client = _clientFactory.CreateClient("imageofday");
        var image = await client.GetFromJsonAsync<Image>("api/image");
        return image;
    }
    catch (Exception ex)
    {
        _logger.LogError(ex.Message, ex);
    }

    return null;
}

Inject our new service from our component

With the service wrapper now complete, we can inject our new service instead of direct dependency on HttpClient. Change the HttpClient injection in Index.razor.cs to our new service instead:

[Inject]
public IApiClientService ApiClientService { get; set; }

// stuff

protected override async Task OnInitializedAsync()
{
    _image = await ApiClientService.GetImageOfDay();
}

If you run the app, you should see no changes—we didn’t have to modify the markup at all—but we’ve made our lives a lot easier, and testing should be a snap as we add to our project.

Test our component

With our API wrapper in place, testing is a whole lot easier. I’ve created an ImageOfDayTest class in our Test library.

I’ll be adding a reference to Moq, a popular mocking library, to mimic the response back from our service. You can download the package from NuGet Package Manager or just drop this in Test.csproj:

<ItemGroup>
    <PackageReference Include="Moq" Version="4.15.1" />
</ItemGroup>

I’ll build out a sample Image to return from the service. I’ll create a private helper method for that:

private static Image GetImage()
{
    return new Image
    {
        Date = new DateTime(2020, 01, 01),
        Title = "My Sample Image",
        Url = "https://nasa.gov"
    };
}

In my test case, I’ll mock my client, return an image, and inject it into my bUnit’s TestContext:

var mockClient = new Mock<IApiClientService>();
mockClient.Setup(i => i.GetImageOfDay()).ReturnsAsync(GetImage());

using var ctx = new TestContext();
ctx.Services.AddSingleton(mockClient.Object);

Note: The integration test vs. mocking in a unit test is a hot topic, especially when testing dependencies. My intent here is to unit test rendering behavior and not my services, but calling the endpoint from the test is also an option if you’re up for it.

With that in place, I can render my component, and assert against expected output with the following code:

var cut = ctx.RenderComponent<Client.Pages.Index>();
var h1Element = cut.Find("h1").TextContent;
var imgElement = cut.Find("img");
var pElement = cut.Find("p");

h1Element.MarkupMatches("My Sample Image");
imgElement.MarkupMatches(@"<img src=""https://nasa.gov"" 
    class=""rounded-lg h-500 w-500 flex items-center justify-center"">");
pElement.MarkupMatches(@"<p class=""text-2xl"">Wednesday, January 1, 2020</p>");

My tests pass—ship it!

Wrap up

In this post, we learned how to isolate HttpClient dependencies in our Blazor code. To do this, we moved our component’s C# code to a partial “code-behind class” and built a service that uses the IHttpClientFactory. Then, we were able to use bUnit to test our component quite easily.

Are refactorings sexy? No. Are they fun? Also no. Are they important? Yes. Is this the last question I’ll ask myself in this post? Also yes. In the next post, we’ll get back to updating the UI.

]]>
<![CDATA[ The .NET Stacks #26: .NET 5 has arrived, let's party ]]> https://www.daveabrock.com/2020/11/21/dotnet-stacks-26/ 608c3e3df4327a003ba2fe73 Fri, 20 Nov 2020 18:00:00 -0600 Note: This is the published version of my free, weekly newsletter: The .NET Stacks—originally sent to subscribers on November 16, 2020. Subscribe at the bottom of this post to get the content right away!

Happy Monday, everybody! I hope you have a wonderful week. We’ll be covering a few things this morning:

  • .NET 5 has arrived
  • Dependency injection gets a community review
  • Last week in .NET world

🥳.NET 5 has arrived

Well, we made it: .NET 5 is officially here. This week, Microsoft showed off their hard work during three busy days of .NET Conf—and with it, some swag including my new default Visual Studio Code theme. I’m starting to wonder if overhyping Blazor is possible—as great as it is, I wish Microsoft would dial it back a bit.

Anyway: we’ve spent the better part of six months discussing all the updates (like, my favorite things, Blazor’s readiness, the support model, what’s happening to .NET Standard, EF Core 5, app trimming, System.Text.Json vs. Newtonsoft, and so much more).

I know you’re here for the .NET Conf links—so let me be of service. You can see all the talks from this YouTube playlist, but the highlights are below.

From the Microsoft perspective, we had the following sessions:

From the community:

💉 Dependency injection gets a community review

During the day job, I’ve been teaching infrastructure engineers how to code in C#. With plenty of experience in scripting languages like PowerShell, explaining variables, functions, and loops isn’t really a big deal. Next week, however, I’ll have to discuss dependency injection, and I’m not looking forward to it. That’s not because they won’t understand it—they’re very smart and they definitely will (eventually). It’s because it’s a head-spinning topic for beginners, but one that’s essential to learn because ASP.NET Core is centered around it.

If you aren’t familiar with dependency injection, it’s a software design pattern that helps manage dependencies towards abstractions and not lower-level implementation details. New is glue, after all. In ASP.NET Core, we use interfaces (or base classes) to abstract dependencies away, and “register” dependencies in a service container (most commonly IServiceProvider in the ConfigureServices method in a project’s Startup class). This allows us to “inject” services (or groups of services) only when we need them.
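
Here’s roughly what that looks like in code. A minimal sketch, with a made-up IGreetingService that isn’t from the article:

public interface IGreetingService
{
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}

// In Startup.ConfigureServices, register the abstraction once...
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IGreetingService, GreetingService>();
}

// ...and the container injects it wherever it's requested.
public class GreetingsController : ControllerBase
{
    private readonly IGreetingService _greetingService;

    public GreetingsController(IGreetingService greetingService)
        => _greetingService = greetingService;
}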

Right on cue, this week’s edition of .NET Twitter Drama included the discussion of a Hacker News article criticizing the ASP.NET Core DI practice as overkill. The article mirrors a lot of HN: .NET rants with some valuable feedback nested inside.

In my experience, DI in general is a difficult concept to learn initially but sets you up for success for maintaining loose coupling and testability as your project matures. However, the initial ceremony can drive people away from ASP.NET Core as developers can view it as overkill if they think their project will never need it.

I do think the conversation this week helped folks understand that working with settings is very ceremonial and can lead to confusion (just look at the options pattern). This might foster .NET team discussions about doc updates and how to provide a better experience when working with configuration. A boy can dream, anyway.
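
If you haven’t run into the options pattern, here’s the ceremony in question. A sketch with a made-up ApiSettings class, relying on Microsoft.Extensions.Options:

public class ApiSettings
{
    public string BaseUrl { get; set; }
}

// Startup.ConfigureServices: bind the "ApiSettings" section of configuration
services.Configure<ApiSettings>(Configuration.GetSection("ApiSettings"));

// Consumers take a dependency on IOptions<ApiSettings>, not raw configuration
public class ApiClient
{
    private readonly ApiSettings _settings;

    public ApiClient(IOptions<ApiSettings> options) => _settings = options.Value;
}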

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

With .NET Conf, there were no community standups, so it’s a light section—except for The .NET Docs Show, which codes a drone with Bruno Capuano.

🚀 .NET 5

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Simplify your ASP.NET Core API models with C# 9 records ]]> https://www.daveabrock.com/2020/11/18/simplify-api-models-with-records/ 608c3e3df4327a003ba2fe72 Tue, 17 Nov 2020 18:00:00 -0600 Out of all the new capabilities C# 9 brings, records are my favorite. With positional syntax, they are immutable by default, which makes working with data classes a snap. I love the possibility of maintaining mutable state in C# where appropriate, like for business logic, and maintaining immutability (and data equality!) with records.

And did you know that with ASP.NET Core 5, model binding and validation supports record types?

In the last post about OpenAPI support in ASP.NET Core 5, I used a sample project that worked with three very simple endpoints (or controllers): Bands, Movies, and People. Each model was in its own class, like this:

Band.cs:

using System.ComponentModel.DataAnnotations;

namespace HttpReplApi.Models
{
    public class Band
    {
        public int Id { get; set; }

        [Required]
        public string Name { get; set; }
    }
}

Movie.cs:

using System.ComponentModel.DataAnnotations;

namespace HttpReplApi.Models
{
    public class Movie
    {
        public int Id { get; set; }

        [Required]
        public string Name { get; set; }
        public int ReleaseYear { get; set; }

    }
}

Person.cs:

using System.ComponentModel.DataAnnotations;

namespace HttpReplApi.Models
{
    public class Person
    {
        public int Id { get; set; }

        [Required]
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

I can simplify these by using records instead. In the root, I’ll just create an ApiModels.cs file (I could put them in the controllers themselves, but that feels … messy):

using System.ComponentModel.DataAnnotations;

namespace HttpReplApi
{
    public record Band(int Id, [Required] string Name);
    public record Movie(int Id, [Required] string Name, int ReleaseYear);
    public record Person(int Id, [Required] string FirstName, string LastName);
}
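
Model binding treats these like any other type. Here’s a quick sketch of an illustrative endpoint (not part of the sample project):

[HttpPost]
public ActionResult<Band> Create(Band band)
{
    // With [ApiController], the [Required] Name property on the
    // Band record is validated before this code runs.
    return Ok(band);
}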

After I change my SeedData class to use positional parameters, I am good to go—this took me about 90 seconds.
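
(For illustration, since the actual SeedData class isn’t shown here, the change boils down to swapping object initializers for constructor calls:)

// Before, with object initializer syntax:
new Band { Id = 1, Name = "Led Zeppelin" };

// After, with positional parameters:
new Band(1, "Led Zeppelin");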

For some fun, if I grab a movie by ID, I can use the deconstruction support to get out my properties. (I’m using a discard since I’m not doing anything with the first argument, the Id.)

[HttpGet("{id}")]
public async Task<ActionResult<Movie>> GetById(int id)
{
    var movie = await _context.Movies.FindAsync(id);

    if (movie is null)
    {
        return NotFound();
    }

    var (_, name, year) = movie;
    _logger.LogInformation($"We have {name} from {year}");

    return movie;
}
]]>
<![CDATA[ Use OpenAPI, Swagger UI, and HttpRepl in ASP.NET Core 5 to supercharge your API development ]]> https://www.daveabrock.com/2020/11/16/httprepl-openapi-swagger-netcoreapis/ 608c3e3df4327a003ba2fe71 Sun, 15 Nov 2020 18:00:00 -0600 When developing APIs in ASP.NET Core, you’ve got many tools at your disposal. Long gone are the days when you run your app from Visual Studio and call your localhost endpoint in your browser.

Over the last several years, a lot of tools have emerged that use the OpenAPI specification—most commonly, the Swagger project. Many ASP.NET Core API developers are familiar with Swagger UI, a REST documentation tool that allows developers—whether building an API or consuming it—to interact with the API from a nice interface built from a project’s swagger.json file. In the .NET world, the functionality comes from the Swashbuckle library, at 80 million NuGet downloads and counting.

Additionally, I’ve been impressed by the HttpRepl project, which allows you to explore your APIs from the command line. Unlike utilities such as curl, it lets you explore APIs much like you explore directories and files—all from a simple and lightweight interface. You can cd into your endpoints and call them quickly to achieve lightning-fast feedback. Even better, it has OpenAPI support (including the ability to perform OpenAPI validation on connect).

While these utilities are not new with ASP.NET Core 5, it’s now much easier to get started. This post will discuss these tools.

Before you get started, you should have an ASP.NET Core API ready—likely one with create-read-update-delete (CRUD) operations. Feel free to clone and use mine, if you’d prefer.


What’s the difference between OpenAPI and Swagger?

If you’ve heard OpenAPI and Swagger used interchangeably, you might be wondering what the difference is.

In short, OpenAPI is a specification used for documenting the capabilities of your API. Swagger is a set of tools from SmartBear (both open-source and commercial) that use the OpenAPI specification (like Swagger UI).

Use Swagger UI with ASP.NET Core projects by default

For the uninitiated, the Swashbuckle project allows you to use Swagger UI—a tool that renders dynamic pages that allow you to describe, document, and execute your API endpoints. Here’s how mine looks.

A high-level look at my Swagger page

And if you look at a sample POST (or any action), you’ll see a sample schema and response codes.

A sample post in Swagger

In previous versions of ASP.NET Core, you had to manually download the Swashbuckle package, configure the middleware, and optionally change your launchSettings.json file. Now, with ASP.NET Core 5, that’s baked in automatically.

A big driver for the default OpenAPI support is the integration with Azure API Management. Now with OpenAPI support, the Visual Studio publishing experience offers an additional step—to automatically import APIs into Azure API Management. To the cloud!

How it’s enabled, and how you can opt out

If you create an ASP.NET Core 5 Web API project, you’ll see an Enable OpenAPI support checkbox that’s enabled by default. Just deselect it if you don’t want it.

Visual Studio OpenAPI checkbox

If you create API projects using dotnet new webapi it’ll be baked in, too. (To opt out from OpenAPI support here, execute dotnet new webapi --no-openapi true.)

Verify the middleware and launchSettings.json

After you create a project, we can see it’s all done for us. If you open your .csproj file, you’ll see the reference:

<ItemGroup>
    <PackageReference Include="Swashbuckle.AspNetCore" Version="5.6.3" />
</ItemGroup>

Then, in the ConfigureServices method in your Startup.cs, you’ll see your Swagger doc attributes defined and injected (this information will display at the top of your Swagger UI page). You can definitely add more to this.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "HttpReplApi", Version = "v1" });
    });
}
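
For example, here’s a sketch of a more descriptive document; the description and contact values are made up:

services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "HttpReplApi",
        Version = "v1",
        Description = "An API for managing bands, movies, and people.",
        Contact = new OpenApiContact
        {
            Name = "Dave Brock",
            Url = new Uri("https://www.daveabrock.com")
        }
    });
});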

And, in the Configure method in that same file you see the configuration of static file middleware:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "HttpReplApi v1");
        });
    }

    // other services removed for brevity
}

Then, in launchSettings.json, you’ll see your project launches the Swagger UI page with the /swagger path, instead of a lonely blank page.

{
  "profiles": {
    "HttpReplApi": {
      "commandName": "Project",
      "dotnetRunMessages": "true",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Now that you’ve got this set up for you, you can go to town on documenting your API. You can add XML documentation and data annotations—two easy ways to boost your Swagger docs. For example, the Produces annotations help define the status codes to expect.

From my BandsController.cs, I use a lot of Produces annotations:

[HttpDelete("{id}")]
[ProducesResponseType(StatusCodes.Status204NoContent)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesDefaultResponseType]
public async Task<IActionResult> Delete(int id)
{
  // delete by id logic
}

I love that this is available by default, but would love to see more OpenAPI scaffolding in the templates. It might be a little opinionated, but adding some Produces annotations in the default WeatherForecast controller would allow for a better developer experience with Swagger out of the box (since it is now out of the box).

Use HttpRepl for a great command-line experience

You can use HttpRepl to test your ASP.NET Core Web APIs from the command-line quickly and easily.

Install

You install HttpRepl as a .NET Core Global Tool, by executing the following from the .NET Core CLI:

dotnet tool install -g Microsoft.dotnet-httprepl

As a global tool, you can run it from anywhere.

Basic operations

To get started, I can launch my app and connect to it from httprepl:

httprepl https://localhost:5001

If I do an ls or dir, I can look at my endpoints:

Doing a dir from the root

To execute a get on my /bands endpoint, I get a response payload, with headers:

Doing a GET on bands

Then, I could say get {id} to get a Band by id:

Doing a get by id

Modify data with a default editor

What happens when you want to update some data? You could do something like this…

post --content "{"id":2,"name":"Tool"}"

…but that’s only manageable for the simplest of scenarios. You’ll want to set up a default text editor to work with request bodies. You can construct a request, then HttpRepl will wait for you to save and close the tab or window.

In HttpRepl, you can set various preferences, including a default editor. I’m using Visual Studio Code on Windows, so would execute something like this…

pref set editor.command.default "C:\Program Files\Microsoft VS Code\Code.exe"

If using Visual Studio Code, you’ll also want to pass a --wait flag so VS Code waits for a closed file before returning.

pref set editor.command.default.arguments "--wait"

You can also set various other preferences, like color and indentation.

Now, when you say post from your endpoint, VS Code opens a .tmp file with a default request body, then waits for you to save and close it. The POST should process successfully.

Performing a post

With the ID in hand, I can do the same steps to modify using put 5, then run get 5 to confirm my changes:

Performing a put

If I want to delete my resource, I can execute delete 5 and it’ll be gone. I tweeted a video of me cycling through the CRUD operations.

Remove repetition with a script

If I find myself using the same test scenarios, I can throw these commands in a .txt file instead. For example, I created an api-script.txt file that I execute from the root of my httprepl session:

cd Bands
ls
get
post --content "{"id": 20, "name": "The Blue Medium Chili Peppers"}"
put 20 --content "{"id": 20, "name": "The Red Mild Chili Peppers"}"
get 20
delete 20

Then I can run it using run {path_to_script}.

Learn more about configuring HttpRepl

This is a straightforward walkthrough, but you can do a whole lot more. I was impressed with the robust configuration options. This includes manually pointing to the OpenAPI description, verbose logging, setting preferences, configuring formatting, customizing headers and streaming, logging to a file, using additional HTTP verbs, testing secured endpoints, accessing Azure-hosted endpoints, and even configuring your tools to launch HttpRepl.

All this information can be found at the official doc, wonderfully done by our friend Scott Addie.

Wrap up

In this post, we talked about the difference between OpenAPI and Swagger, using Swagger UI by default in your ASP.NET Core Web API projects, and how to use the HttpRepl tool.

As always, let me know your experience with these tools.

]]>
<![CDATA[ The .NET Stacks #25: .NET 5 officially launches tomorrow ]]> https://www.daveabrock.com/2020/11/13/dotnet-stacks-25/ 608c3e3df4327a003ba2fe70 Thu, 12 Nov 2020 18:00:00 -0600 With .NET 5 shipping this week, it’s going to be such a release.

On tap this week:

  • .NET 5 officially launches tomorrow
  • Are C# 9 records actually immutable by default?
  • Last week in the .NET world

.NET 5 officially launches tomorrow

After eight preview releases, two release candidates, and some tears—by me, working on the early preview bits—it’s finally happening: .NET 5 ships tomorrow. Of course, the release candidates shipped with a go-live license but everything becomes official tomorrow. You’ll be able to download official bits and we might even see it working with Azure App Service (no pressure). There’s also .NET Conf, where you’ll get to geek out on .NET 5 for three straight days.

It’s been a long time coming—and it feels a bit longer with all this COVID-19 mess, doesn’t it? Even so, the “Introducing .NET 5” post hit about 18 months ago—and with it, the promise of a unified platform (with Xamarin hitting .NET 6 because of pandemic-related resourcing constraints). We’re talking a single .NET runtime and framework however you’re developing an app, and a consistent release schedule (major releases every November).

Of course, this leads to questions about what it means to say .NET Framework, .NET Standard, and .NET Core—not to mention how the support will work. I covered those not-sexy-but-essential questions two weeks ago.

Since I started this newsletter in May (we’re at Issue #25!), we’ve been dissecting what’s new week by week. (You can look at the archives on my site.) For a condensed version, here’s my favorite things about .NET 5—both big and small. (This is a little like talking about your favorite song or movie, so your opinions might differ.)

Custom JSON console logger

ASP.NET Core now ships with a built-in JSON formatter that emits structured JSON logs to the console. Is this a huge change? No. Will it make my life easier on a daily basis? You know it.
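
Hooking it up takes a couple of lines. A sketch, assuming the typical generic host in Program.cs:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // swap the plain-text console logger for the JSON formatter
        .ConfigureLogging(logging => logging.AddJsonConsole())
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());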

Enhanced dotnet watch support

In .NET 5, running dotnet watch on an ASP.NET Core project now launches the default browser and auto-refreshes on save. This is a great quality-of-life developer experience improvement as we patiently wait for this to hit Visual Studio.

OpenAPI spec on by default for ASP.NET Core projects

When you create a new API project using dotnet new webapi, you’ll see OpenAPI output enabled by default—meaning you won’t have to manually configure the Swashbuckle library and the Swagger UI page is enabled in development mode. This also means that F5 now takes you straight to the Swagger page instead of a lonely 404.

Performance improvements

If you have some time to kill and aren’t scared off by low-level details, I’d highly recommend Stephen Toub’s July post on .NET 5 performance improvements. In short, text operations are 3x-5x faster, regular expressions are 7x faster with multiline expressions, serializing arrays and complex types in JSON are faster by 2x-5x, and much more.

EF Core 5 updates

EF Core 5 is also shipping with a ton of new features. Where to begin? Well, there’s many-to-many navigation properties (skip navigations), table-per-type inheritance mapping, filtered and split includes, general query enhancements, event counters, SaveChanges events, savepoints, split queries for related collections, database collations, and … well, check out the What’s New doc for the full treatment.
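
To pick just one, filtered includes let you trim related data inside the query itself. A sketch against a hypothetical Blog/Post model:

// Only load each blog's published posts, not the whole collection
var blogs = await context.Blogs
    .Include(blog => blog.Posts.Where(post => post.IsPublished))
    .ToListAsync();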

Single file apps

Single-file apps are now supported with .NET 5, meaning you can publish and distribute an app in a single executable. Hooray.
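
To try it, a couple of project properties are all you need (a sketch; you can also pass these as dotnet publish arguments):

<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>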

Blazor updates

.NET 5 ships with a ton of Blazor improvements, including CSS isolation, JavaScript isolation, component virtualization, toggle event support, switching WebAssembly apps from Mono to .NET 5, lazy loading, server pre-rendering, and so on.

C# 9

I’m super excited for the release of C# 9, especially its embrace of paradigms common in functional programming. With init-only properties and records, C# developers have the flexibility to easily use immutable constructs in a mutable-by-design language. It might take some getting used to, but I love the possibilities. I see a lot of promise in keeping real-world object modeling while introducing immutability where treating objects as data makes sense.

(There’s also great improvements with top-level programs, pattern matching, and target typing.)

Are C# 9 records actually immutable by default?

Speaking of records, there seems to be some confusion on if C# 9 records are immutable by default. The answer is: yes, and no. Records are immutable by default when using positional arguments. When you want to use them with object initializers, they aren’t—which makes sense, since initializers are more flexible in how objects are constructed.

Feel free to check out my post for the full details.

Happy birthday, Dad

Someone has turned a lucky … um … well, numbers don’t matter, do they? Happy Birthday, Dad. You can thank him for … waves hands … all this. I’m not sure what I’d be doing without him encouraging my nerdy tendencies during my most awkward years, but I’m glad I’ll never have to find out. Have a good day, mister.

When I said I like to wear many hats, I didn’t mean this one.

Picture with mom

(Don’t worry, this is the last birthday wish. Back to regularly scheduled programming.)

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

The .NET Rocks podcast discusses Cake 1.0 with Mattias Karlsson.

🎥 Videos

]]>
<![CDATA[ Blast Off with Blazor: Use .NET 5 to update the HTML head from a Blazor component ]]> https://www.daveabrock.com/2020/11/08/blast-off-blazor-update-head/ 608c3e3df4327a003ba2fe6f Sat, 07 Nov 2020 18:00:00 -0600 So far in this series, we’ve walked through a project intro and also got our feet wet with our first component.

Today, we’re going to take a look at a welcome .NET 5 feature in Blazor: the ability to update your HTML head on the fly without the need for JavaScript interoperability.

When I speak of updating the HTML head, I’m referring to what’s inside the <head> tag in your index.html file. You can now use the following native Blazor components to dynamically update the HTML head:

  • <Title> - the title of the document. You’d use this if you want to update your user on any updates, especially if they are busy with other tabs. You’ve probably seen this with updates for new email messages or new chat notifications.
  • <Link> - gives you the ability to link to other resources. For example, you might want to dynamically update this to change stylesheets when the user clicks a button to specify light or dark mode.
  • <Meta> - allows you to specify a variety of information for things like SEO, social cards, and RSS information.

Since these are individual components, you can definitely use them together in your code. Following the email example, you could update the title bar with an unread message count and also change the “you have a message” favicon appropriately.

In this post, we’re going to dynamically update the HTML <title> tag from our Blazor component. And since we’re already working with the browser’s title bar, I’ll walk you through adding favicons to our site.


Dynamically set page title

As I mentioned, we’re going to dynamically update the <title> tag of our component. Before we do that, we’ll first need to add a package reference, add a script reference, and reference the new package in our project.

Add package reference

From the Client directory of our project, open your terminal and add a package reference using the dotnet CLI.

dotnet add package Microsoft.AspNetCore.Components.Web.Extensions --prerelease

Heads up! At this time, the --prerelease flag is required.

Add script reference

In the index.html file in wwwroot, add the following reference:

<script src="_content/Microsoft.AspNetCore.Components.Web.Extensions/headManager.js"></script>

Reference the project

We can use one of two options to reference the new package in our app. Our first option is a direct @using in the Index.razor component:

@using Microsoft.AspNetCore.Components.Web.Extensions.Head

Or, we can include it in our _Imports.razor file. The _Imports.razor file, which sits at the root of our Client project, allows us to reference our app’s imports globally. That way, we won’t have to manually add shared references to all our components that need it. With my new addition, here’s what my _Imports.razor file looks like now:

@using System.Net.Http
@using System.Net.Http.Json
@using Microsoft.AspNetCore.Components.Forms
@using Microsoft.AspNetCore.Components.Routing
@using Microsoft.AspNetCore.Components.Web
@using Microsoft.AspNetCore.Components.Web.Virtualization
@using Microsoft.AspNetCore.Components.WebAssembly.Http
@using Microsoft.JSInterop
@using Client
@using Client.Shared
@using Data
@using Microsoft.AspNetCore.Components.Web.Extensions.Head

Either method works, but I’m going with the latter option for reusability. We’re now ready to set the page title.

Set the page title

Here’s what we’ll do: while I’m waiting for the NASA Astronomy Picture of the Day (APOD) API call to return, our title bar will say Blasting off…. Then, when we get a response back, it’ll say New! From YEAR!, where YEAR is the year of the image. We’ll derive this value from the Date property returned by the API.

In the @code section of our Index.razor component, add the following code to parse the year from our Date property:

private int GetYear(DateTime date)
{
    return date.Year;
}

Then, we can use a one-line ternary statement to return a message based on whether we’ve received a response from the API.

private string GetPageTitle()
{
    return image is null ? "Blasting off..." : $"New! From {GetYear(image.Date)}!";
}

Now, all that’s left is to drop the <Title> component at the top of our markup, and call our logic to change the title text.

<Title Value="@GetPageTitle()"></Title>

Here’s a GIF of our update in action. (I set a Thread.Sleep(1000) for demonstration purposes, but I don’t recommend it in practice obviously.)

Our title bar in action

How it all works

Before .NET 5, updating the HTML head dynamically required the use of JavaScript interoperability.

You’d likely drop the following JavaScript at the bottom of your index.html:

<script>
    function SetTitle(title) {
        document.title = title;
    }
</script>

Then, you’d have to inject JavaScript interop and probably create a special component for it:

@inject IJSRuntime JSRuntime

// markup here...

@code {
    [Parameter]
    public string Value { get; set; }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        await JSRuntime.InvokeVoidAsync("SetTitle", Value);
    }
}

Then, you’d add the component on your page to update the title bar dynamically, where your GetPageTitle method would do what it needs to populate the title.

<PageTitle Value="@GetPageTitle()" />

In .NET 5, this still happens—it’s just abstracted away from you. If you take a look at the source code for the <Title> component, it injects an IJSRuntime and takes a Value—just as you’ve done before. Then, in the OnAfterRenderAsync component lifecycle method, it sets the title by calling the headManager.js file, where you’ll see:

export function setTitle(title) {
  document.title = title;
}

Note: The OnAfterRenderAsync method is called after a component has finished rendering.

So, this isn’t doing anything too magical. It’s doing it out-of-the-box and giving you one less component to maintain.

With that done, let’s add favicons to our app.

Add favicons to our app

When you create a Blazor app (or any ASP.NET Core app, for that matter), a bland favicon.ico file is dropped in your wwwroot directory. (For the uninitiated, a favicon is the small icon in the top-left corner of your browser tab, right next to the page title.) This might lead you to believe that if you want to use favicons, you can either use that file or overwrite it with one of your own—then move on with your life.

While you can do that, you probably shouldn’t. These days you’re dealing with different browser requirements and multiple platforms (Windows, iOS, Mac, Android), where just one favicon.ico will not get the job done. For example, iOS users can pin a site to their home screen—and iOS will grab your favicon as the “app” image.

Thanks to Andrew Lock’s wonderful post on the topic, I went over to the RealFaviconGenerator for assistance. (If you want more details on the process, like how to create your own, check out his post.)

Before we do that, we’ll need to pick a new image.

Did you know the .NET team has a branding GitHub repository where you can look at logos, presentation templates, wallpapers, and a bunch of illustrations of the purple .NET bot? For our example, we’ll use the bot using a jetpack because—obviously. We’re blasting off, after all.

Let’s update our header icon to this image, too. After dropping the file in our wwwroot/assets/img directory, we can edit our NavBar component in our Shared directory to include our new file.

<nav class="flex items-center justify-between flex-wrap bg-black p-6 my-bar">
    <div class="flex items-center flex-shrink-0 text-white mr-6">
        <img src="images/dotnet-bot-jetpack.png">
        <span class="font-semibold text-xl tracking-tight">Blast Off with Blazor</span>
    </div>
    <!-- more HTML left out for brevity -->
</nav>

I feel better and, as a bonus, the fine folks at NASA don’t have to worry about asking me to take their official logo down.

Our title bar in action

Now, we can head over to the RealFaviconGenerator site to upload our new icon. After adjusting any settings to our liking, it’ll give us a bunch of icons, which we’ll unzip to the root of our wwwroot directory.

In index.html in the wwwroot directory, add the following inside the <head> tag:

<link rel="apple-touch-icon" sizes="76x76" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
<meta name="msapplication-TileColor" content="#da532c">
<meta name="theme-color" content="#ffffff">

While it’s not the worst idea to put this markup in a component or partial view, I’m fine with putting it right into my static index.html file—it’s a one-time step and this is the only file that needs it.

What about the tests?

I’ve placed an emphasis on testing our components with the bUnit library. We’ll work on testing this, and the rest of our Index component, in the next post. We need to mock a few things, like HttpClient, and I’m currently researching the best way to do so. Our next post will focus on testing Blazor components with HttpClient dependencies.

Wrap up

In this post, we dynamically updated the HTML head by setting the page title based on the image. Then, we updated our header icon and also showed how to include favicons in a Blazor WebAssembly project.

Stay tuned for the next post where we work on writing tests for our Index component.

Resources

]]>
<![CDATA[ The .NET Stacks #24: Blazor readiness and James Hickey on Coravel ]]> https://www.daveabrock.com/2020/11/06/dotnet-stacks-24/ 608c3e3df4327a003ba2fe6e Thu, 05 Nov 2020 18:00:00 -0600 RIP, James Bond. While watching Goldfinger last night, I forgot about the best feature of Connery Bond films: if you’re confused about the plot, the villain will explain it all to you right before the movie ends. Ah, the original “let’s have retro before the sprint ends.”

This week:

  • Is Blazor ready for the enterprise?
  • Dev Discussions: James Hickey
  • Last week in the .NET world

Is Blazor ready for the enterprise?

If you’re writing .NET web apps, you’re likely aware of all the Blazor hype. (If you haven’t heard, it allows you to write web UIs in C#, where you’ve traditionally had to use JavaScript.) In the last two years, I’ve seen it morph from a cute Microsoft project to something they’ve deemed production-ready.

And with .NET 5—becoming generally available next week!—we’re seeing a lot of great improvements with Blazor: CSS isolation, JavaScript isolation, component virtualization, toggle event support, lazy loading, server-side prerendering, and so on. These improvements help Blazor catch up to base features of leading SPA frameworks, like Vue, React, and Angular.

If you’re slinging code for a decent-sized company, you might be wondering if Blazor is ready for the enterprise. Can you convince your pointy-haired bosses to use it for new, and maybe existing, apps? I think it’s ready. However, this isn’t an “easy yes”—it involves a nuanced answer. We’ll answer this question by answering some common questions.

Is it another Silverlight? If you’ve worked in Blazor for a little while, it’s a tired argument, but one you’ll have to field from those not in the day-to-day .NET world. While a similar concept—using C# instead of JavaScript to build web apps—Blazor is built on web standards, not on proprietary tech that can suddenly be thrown away. It isn’t a browser plugin like Silverlight.

How will it help teams deliver faster? Blazor lowers the front-end learning curve normally associated with JavaScript, and allows devs to use their language and their tools to get things done. Blazor doesn’t replace JavaScript. However, if you’re in a shop with mostly C# devs, it’s an obvious productivity improvement. This isn’t a quick on/off switch, as teams still need to get familiar with core SPA concepts, but the use case for .NET shops is clear.

Is it supported with a good ecosystem? Because Blazor lives in the .NET ecosystem, it comes with official Microsoft support just like any other product (and, last week, we discussed .NET 5 support). Additionally, Microsoft continues to devote significant investment in it and has a long history of backwards compatibility. The ecosystem is not as evolved as it is with Angular and React—they’ve had a head start—but is growing tremendously. As Peter Vogel mentioned, Blazor already has 25% of the interest that Vue has (from Google Trends).

Does it perform well? Compared to other SPA frameworks, Blazor’s performance is … fine? It’ll pass the eye test with Vue and whatnot in most cases, but there will be some waiting—Blazor WebAssembly has a large-ish download size (as it’s loading .NET in the browser), and Blazor Server has a network hop for each user interaction. The team has made a lot of progress in addressing performance, and AOT compilation is the most popular request for ASP.NET in .NET 6 (and will impact non-Blazor ASP.NET apps as well). If you’re dealing with huge amounts of data, you might want to wait for these improvements—but Blazor should be suitable in most business cases.

Dev Discussions: James Hickey

When designing a real production system—especially over the web—there’s a lot to think about past coding features for stakeholders. For example, how do you manage events? What about scheduling tasks? What about your caching mechanism? A lot of these considerations make you wonder if they should be supplied out-of-the-box with ASP.NET Core.

James Hickey’s Coravel project is here to assist. Billed as a “near-zero config .NET Core library that makes task scheduling, caching, queuing, mailing, event broadcasting, and more a breeze,” it ships with an impressive set of free features—and you can look at Coravel Pro for a professional admin panel, UI management, health metrics, and more.

I caught up with James to talk about his project, software design, and more.

James Hickey

As the author of Refactoring TypeScript, what are your biggest tips on keeping a codebase healthy?

  • Simplicity rules the day. If you “feel” like your solution is getting complicated, then step back or take a 15-minute walk!
  • Reuse is over-hyped. It leads to coupling. Coupling more often-than-not causes trouble down the road.
  • Abstract I/O from other kinds of logic. Don’t do direct database or file system access from the core logic in your system. Put that behind an interface.
  • If you want to get good at designing code, then get good at knowing how to build unit tests that have minimal (or no) mocking/stubbing.
  • Use automated quality analyzers to fail builds when code standards are violated.
  • Fix broken windows.

Let’s talk about your Coravel project. I love your bold selling point: “Near-zero config .NET Core micro-framework that makes advanced application features like Task Scheduling, Caching, Queuing, Event Broadcasting, and more a breeze!” Can you walk through what caused you to pursue this project, how Coravel can help me, and how it works with existing .NET projects?

Back in the day when I was building .aspx WebForms, I discovered the PHP framework Laravel and loved how it included all these additional features like queuing, task scheduling, and so on out of the box. When .NET Core was gaining traction, I found myself wanting some of the conveniences of Laravel but in .NET Core.

Coravel can help you get a new project up-and-running faster by including things like automated task scheduling, background task queuing, and so on out of the box without having to install extra infrastructure. It works with any .NET Core projects and is (supposed to be) super easy to install, configure and use.

For scheduling and event management, I know a lot of folks like utilizing things like triggered Azure Functions and Service Bus and the like—is there an advantage to using Coravel over that?

One disadvantage is that Coravel doesn’t (yet!) support distributed scheduling, queuing, and caching features. One advantage though, again, is you don’t have to install any additional infrastructure. You can install a NuGet package, add a couple lines of code and you’re good-to-go!

You can use Coravel and these other products together, too. For example, we often use a distributed service bus to integrate separate systems via async messaging. Coravel can support the internal integration within a system—like how you might emit domain events between aggregates. Doing this in-memory has certain advantages. Jimmy Bogard, for example, has written about using in-memory domain event processing.

I know by default, Coravel caching is in-memory (but has options to persist to a database). How does this compare to something like Redis?

Again, using Coravel you won’t have to muck around with installing, configuring, and so on with that additional infrastructure. Redis is a fantastic product and I’m hoping to eventually offer providers for Coravel to hook into Redis—which could make using Redis caching, messaging, and so on even easier. I just need to find some time 😉.

Do you view Coravel as an enhancement over the BCL, or filling in for some of its shortcomings? This is part of a broader thought about OSS’s role within .NET Core.

I view Coravel as enhancements over the BCL. I wouldn’t expect to see these features included in a barebones console application, for example. But when it comes to web applications, these are all features that you’ll need pretty much out-of-the-gate if it’s anything more than a demo application.

I’m a fan of your “Loosely Coupled Show” that you run with Derek Comartin (sure beats the “Tightly Coupled Show”)—a great spot to discuss software architecture and design. In such a fluid industry, are there any firm principles you are stubborn about?

Generally, I stick with using a feature folder approach to structuring any projects I work on. I’ve written about this before and Derek just wrote about it. I’m a huge fan of CQRS and think it fits perfectly with the feature folders approach.

I’ve given a talk on these topics at the Florida .NET meetup if anyone’s interested in hearing more about it.

Check out the full interview at my website.

What is your one piece of programming advice?

Make sure you get lots of sleep and step outside once in a while. 😅

🌎 Last week in the .NET world

🔥 The Top 5

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C#

🔧 Tools

📱 Xamarin

🎤 Podcasts

The 6-Figure Developer podcast talks about managing cloud cost with Omry Hay.

🎥 Videos

]]>
<![CDATA[ Are C# 9 records immutable by default? ]]> https://www.daveabrock.com/2020/11/02/csharp-9-records-immutable-default/ 608c3e3df4327a003ba2fe6d Sun, 01 Nov 2020 18:00:00 -0600 I’m seeing a lot of people excited about records in C# 9, myself included. I wrote about records in excruciating detail this summer, but the gist is that records are types that allow you to perform value-like behaviors on properties with the promise of immutability. They support with expressions, inheritance, and positional records, too. We can finally enforce immutability in C# without a bunch of hacky boilerplate.

This begs the question: can you just declare a record type and get immutability right away? I’m hearing some people saying records are immutable by default, and some are saying they aren’t. Both statements are correct. Let’s explain why.

Records using positional syntax are immutable by default

If you typically create objects using the positional style—by using constructors to create an object by passing an argument list—then, yes, records are immutable by default.

For example, using the positional style you can declare a Person record in just one line of code.

record Person(string FirstName, string LastName);

Then, in the Main method I can do something like this:

static void Main(string[] args)
{
    var person = new Person("Dave", "Brock");
    Console.WriteLine($"I'm {person.FirstName} {person.LastName}");
}

Most importantly, if I try to mutate my FirstName property after initialization…

person.FirstName = "Bob";

…my compiler gets upset, with the error: Init-only property or indexer 'Person.FirstName' can only be assigned in an object initializer, or on 'this' or 'base' in an instance constructor or an 'init' accessor.

So, yes, if you are using positional records, they are immutable by default. From here, you are free to argue with your team about whether you should declare a bunch of one-line types in your .cs files.
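
If you do need a “changed” person, a C# 9 with expression hands you a new record instead of mutating the original. Building on the person instance above:

// Creates a copy with FirstName replaced; person is untouched
var bob = person with { FirstName = "Bob" };
Console.WriteLine($"I'm {bob.FirstName} {bob.LastName}"); // I'm Bob Brock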

Records using nominal creation aren’t immutable by default

There’s another way to work with records in C# 9. You can declare a record with traditional getters and setters, that you’ve likely been doing forever.

public record Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Here you’re using nominal creation by using object initializer syntax. When you do this in C# 9, there’s nothing stopping you from doing something like this, where you can mutate properties after instantiation:

static void Main(string[] args)
{
    var person = new Person
    {
        FirstName = "Dave",
        LastName = "Brock"
    };
    Console.WriteLine($"I'm {person.FirstName} {person.LastName}"); // I'm Dave Brock
    person.FirstName = "Bob";
    Console.WriteLine($"I'm {person.FirstName} {person.LastName}"); // I'm Bob Brock
}

Using nominal creation, then, we can say that records are not immutable by default.

Of course, making this immutable is simple work by changing set to init:

public record Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

When I do this, the compiler will complain again about mutating a property after instantiation: Init-only property or indexer 'Person.FirstName' can only be assigned in an object initializer, or on 'this' or 'base' in an instance constructor or an 'init' accessor.

I can see myself using both positional and nominal methods. I’d use positional creation when I want the entire record to be immutable, and nominal creation when I want to be more particular about which properties are immutable and which are not. For example, in this silly and simple case, I could keep the FirstName immutable but not the LastName (making room for marriages and divorces):

record Person
{
    public string FirstName { get; init; }
    public string LastName { get; set; }
}

There are plenty of developers who will argue that records should be completely immutable—but in a naturally mutable language like C#, the option for both exists by design.

Whatever you use, records are better than a bunch of buggy ceremony

Whatever method you choose, it’s much better than trying to enforce immutability in C# 8 with a readonly struct or classes that implement the IEquatable interface.

For example, in C# 8 we’d write the following ceremony to implement IEquatable:

class Person : IEquatable<Person>
{
    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public string FirstName { get; set; }
    public string LastName { get; set; }

    public bool Equals(Person other)
    {
        // write some Equals override to compare 
        //   from data in the object and not reference itself
        throw new NotImplementedException();
    }
}

To enforce value-like equality, we’ll need to write our own Equals override and probably will end up overriding GetHashCode and ToString, too. With records, we can use either the nominal or positional convention to remove a lot of ceremony—not to mention the inevitable bugs when we forget to update the code with any new properties we add to our object.

Wrap up

In this post, we talked about when C# 9 records are immutable by default, and when they aren’t. We also showed how either approach beats the buggy ceremony we were forced to create in C# 8 or earlier.

]]>
<![CDATA[ Dev Discussions - James Hickey ]]> https://www.daveabrock.com/2020/11/01/dev-discussions-james-hickey/ 608c3e3df4327a003ba2fe6c Sat, 31 Oct 2020 19:00:00 -0500 This is the full interview from my discussion with James Hickey in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

When designing a real production system—especially over the web—there’s a lot to think about past coding features for stakeholders. For example, how do you manage events? What about scheduling tasks? What about your caching mechanism? A lot of these considerations make you wonder if they should be supplied out-of-the-box with ASP.NET Core.

James Hickey’s Coravel project is here to assist. Billed as a “near-zero config .NET Core library that makes task scheduling, caching, queuing, mailing, event broadcasting, and more a breeze,” it ships with an impressive set of free features—and you can look at Coravel Pro for a professional admin panel, UI management, health metrics, and more.

I caught up with James to talk about his project, software design, and more.

(You can reach out to James Hickey through Twitter, browse to the Coravel GitHub repo, and check out his site.)

James Hickey

What are you up to these days? What’s your main gig like?

I’m currently working full-time as a senior .NET developer at a small SaaS company—this will be changing come the new year 😉.

I’m mostly working on building a mobile API and application using .NET Core and Ionic/Angular for the product. There’s also a lot of refactoring involved—hurray for legacy technology!

I also do some contract stuff on the side.

As the author of Refactoring TypeScript, what are your biggest tips on keeping a codebase healthy?

  • Simplicity rules the day. If you “feel” like your solution is getting complicated, then step back or take a 15-minute walk!
  • Reuse is over-hyped. It leads to coupling. Coupling more often-than-not causes trouble down the road.
  • Abstract I/O from other kinds of logic. Don’t do direct database or file system access from the core logic in your system. Put that behind an interface.
  • If you want to get good at designing code, then get good at knowing how to build unit tests that have minimal (or no) mocking/stubbing.
  • Use automated quality analyzers to fail builds when code standards are violated.
  • Fix broken windows.

Let’s talk about your Coravel project. I love your bold selling point: “Near-zero config .NET Core micro-framework that makes advanced application features like Task Scheduling, Caching, Queuing, Event Broadcasting, and more a breeze!” Can you walk through what caused you to pursue this project, how Coravel can help me, and how it works with existing .NET projects?

Back in the day when I was building .aspx WebForms, I discovered the PHP framework Laravel and loved how it included all these additional features like queuing, task scheduling, and so on out of the box. When .NET Core was gaining traction, I found myself wanting some of the conveniences of Laravel but in .NET Core.

Coravel can help you get a new project up-and-running faster by including things like automated task scheduling, background task queuing, and so on out of the box without having to install extra infrastructure. It works with any .NET Core projects and is (supposed to be) super easy to install, configure and use.

For scheduling and event management, I know a lot of folks like utilizing things like triggered Azure Functions and Service Bus and the like—is there an advantage to using Coravel over that?

One disadvantage is that Coravel doesn’t (yet!) support distributed scheduling, queuing, and caching features. One advantage though, again, is you don’t have to install any additional infrastructure. You can install a NuGet package, add a couple lines of code and you’re good-to-go!

You can use Coravel and these other products together, too. For example, we often use a distributed service bus to integrate separate systems via async messaging. Coravel can support the internal integration within a system—like how you might emit domain events between aggregates. Doing this in-memory has certain advantages. Jimmy Bogard, for example, has written about using in-memory domain event processing.

I know by default, Coravel caching is in-memory (but has options to persist to a database). How does this compare to something like Redis?

Again, using Coravel you won’t have to muck around with installing, configuring, and so on with that additional infrastructure. Redis is a fantastic product and I’m hoping to eventually offer providers for Coravel to hook into Redis—which could make using Redis caching, messaging, and so on even easier. I just need to find some time 😉.

Do you view Coravel as an enhancement over the BCL, or filling in for some of its shortcomings? This is part of a broader thought about OSS’s role within .NET Core.

I view Coravel as enhancements over the BCL. I wouldn’t expect to see these features included in a barebones console application, for example. But when it comes to web applications, these are all features that you’ll need pretty much out-of-the-gate if it’s anything more than a demo application.

I’m a fan of your “Loosely Coupled Show” that you run with Derek Comartin (sure beats the “Tightly Coupled Show”)—a great spot to discuss software architecture and design. In such a fluid industry, are there any firm principles you are stubborn about?

Generally, I stick with using a feature folder approach to structuring any projects I work on. I’ve written about this before and Derek just wrote about it. I’m a huge fan of CQRS and think it fits perfectly with the feature folders approach.

I’ve given a talk on these topics at the Florida .NET meetup if anyone’s interested in hearing more about it.

Otherwise, the tips I had for one of the other questions apply to this question too 👍.

What is your one piece of programming advice?

Make sure you get lots of sleep and step outside once in a while. 😅

]]>
<![CDATA[ The .NET Stacks #23: .NET 5 support, migration tools, and links ]]> https://www.daveabrock.com/2020/10/31/dotnet-stacks-23/ 608c3e3df4327a003ba2fe6b Fri, 30 Oct 2020 19:00:00 -0500 Get yourself a teammate that looks at your code like Prince William looks at KFC.

Here’s what we have this week:

  • Understand the release cycles of .NET
  • AWS releases open-source migration tool
  • An update on ASP.NET Core feedback for .NET 6
  • Last week in the .NET world

⏲ Understand the .NET release cycles and support

It’s hard to believe that two weeks from tomorrow, .NET 5 will be generally available. Week by week, we’ve been poking around the improvements between general .NET, ASP.NET Core and Blazor, and EF Core 5, which are all released together now. While the release candidates offer production licenses, on November 10 things will get real.

To that end, you might be wondering when future releases will occur, how Microsoft is supporting .NET 5, and what that means for older versions of .NET Core and .NET Framework. (This isn’t as sexy as Blazor, I suppose, but is important to understand as you plan your app’s future.)

In terms of future releases, new major releases will be released every November.

.NET release schedule

As you can see, “LTS” and “Current” support will alternate between releases. As a refresher, let’s walk through those.

  • Long Term Support (LTS) releases - supported for a minimum of three years (or 1 year after the next LTS ships, whatever is longer). The current LTS releases are .NET Core 2.1 (with end of support on August 21, 2021), and .NET Core 3.1 (December 3, 2022).
  • Current releases - supported for 3 months after the next major or minor release ships

.NET 5 is a “Current” release. Typically, you’d have three months to upgrade to 5.1 but Microsoft says they’re no longer doing point releases. When it comes to .NET 6, it’ll be LTS (supported for three years after general availability)—giving you a less hands-on approach. There’s a Twitter discussion on the benefits and drawbacks.

Microsoft has published an official .NET 5 support policy if you want to explore this further.

What about .NET Framework? While .NET Framework is not dead, it’s basically done. All new features will be built on .NET Core. With .NET Framework 4.8 being the last major release for .NET Framework, it’ll only be supported with reliability and security fixes. You aren’t being forced to move .NET Framework apps to .NET Core, and there are no plans to remove .NET Framework from Windows, but you won’t see any feature improvements.

🔨 AWS open-sources .NET Core migration tool

This week, Amazon Web Services (AWS) open-sourced their Porting Assistant for .NET tool, which assists you in porting your .NET Framework apps to .NET Core on Linux. It was first released in July.

AWS says their differentiator is assessing the entire tree of package dependencies and functionality like incompatible APIs—it uses solution files as the starting point, preventing the need to analyze individual binary files. Interestingly, this single tool appears to be more robust than Microsoft’s migration tools.

This begs the question: what does Microsoft offer? For their part, Microsoft has a comprehensive document about migrating from .NET Framework to .NET Core. They also have a list of tools you can utilize.

  • The .NET Portability Analyzer, which analyzes your code and provides a report about how portable your code is between Framework and Core
  • The .NET API Analyzer, a Roslyn analyzer that detects compatibility in cross-platform libraries
  • The Platform Compatibility Analyzer, another Roslyn analyzer that informs devs when using platform-specific APIs from call sites where API might not be supported
  • A try-convert utility, a .NET global tool that helps convert projects to the SDK-style .csproj format. Microsoft makes clear it is not supported in any way.

It reminds me of the old American football adage: if you have two quarterbacks, you have none. I’ve heard a common question: shouldn’t Microsoft be leading this charge since … they own .NET and are the most familiar with the platform? Why haven’t they invested in robust migration tooling? I see a few factors at play here.

Considering .NET is a free product, the Microsoft play is to get you on Azure. What’s to prevent you from using their awesome conversion tool and heading over to the competition? From AWS’s standpoint, they’re quite incentivized to do this: the sell for .NET devs is hard enough and they need to make things as easy as possible. If you couple that with the complexities of this tooling, Microsoft seems to be a little leery of making any big investments in that space.

📓 An update on ASP.NET Core feedback for .NET 6

A few weeks ago, I wrote about the GitHub issue you can visit to chime in on the future of ASP.NET Core for .NET 6. I’d like to call out a comment by Muhammad Rehan Saeed that outlines core ASP.NET issues that aren’t Blazor related.

While Blazor is awesome and the future of ASP.NET Core, there are tons of ASP.NET developers who aren’t using it for various reasons and are fine with using MVC or Razor Pages. If you’re interested in providing feedback on those issues, look at Muhammad’s comment—personally, I’m interested in HTTP/3 support, streaming API support for MVC, and C# 8 nullable reference type support.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Blast Off with Blazor: Learn components and testing with a custom 404 page ]]> https://www.daveabrock.com/2020/10/28/blast-off-blazor-404-page/ 608c3e3df4327a003ba2fe6a Tue, 27 Oct 2020 19:00:00 -0500 I hope you enjoyed the introduction to our Blast Off with Blazor project. It’s time to get to work!

We’re going to write our first reusable component, and use Egil Hansen’s wonderful bUnit Blazor component testing library to confirm it renders appropriately. I strongly believe testing should never be an afterthought, and we should start testing from Day 1.

To do all this, we’ll write a custom 404 error page. This first one will be simple but will allow us to understand how basic components work.

Understand the App.razor file

Obviously, we know that if the user navigates to a page that doesn’t exist, like /somepage, we’ll want to customize the error page. But where does Blazor figure out what pages are valid?

The answer lies in the App.razor file, which sits at the root of our Blazor Web Assembly project.

You’ll notice the default App.razor file looks something like this:

<Router AppAssembly="@typeof(Startup).Assembly">
    <Found Context="routeData">
        <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
    </Found>
    <NotFound>
        <p>Sorry, there's nothing at this address.</p>
    </NotFound>
</Router>

Here, you’ll see Blazor’s component model in action—even Blazor’s routing mechanism uses components! We see a Router component, a Found component, and a NotFound component. The Router and Found components also take parameters. We’ll dig into component parameters in a future post.

Last time, we learned that you pass an @page directive at the top of your component. For example, our Index.razor component will look like this:

@page "/"

This is a route template. When you compile this component, the generated class provides a RouteAttribute with this route template. Then, at runtime, the RouteView component receives the routeData from the Router component and renders the component with its layout. If you don’t want to specify a layout for each component—pro tip: you probably don’t—you’ll pass a default one. By default, it’ll pass Shared/MainLayout.razor.

For our purposes, we see that the NotFound component decides what to render when a component can’t be found. It passes our default layout—which we want to keep, to preserve our header—and a message that says Sorry, there's nothing at this address. How boring. Let’s make things a little more exciting.

For more information on how routing works in Blazor, read the Microsoft Docs article.

Our first shared component

Let’s create our first shared component, called NotFoundPage.razor. Create that file in the Shared directory of our project. The Shared directory is a good place for components we’ll use throughout our app. (In Visual Studio 2019, you can right-click the Shared directory and select Add > Razor Component.)

As we’ve said, a component can be almost anything—a page, a part of a page, a button, a form, whatever. Ours will be some static HTML, with a button a user can click to go back to our home page.

The following markup displays a “Houston, we have a problem” message with a funny picture and a message. If you don’t understand the CSS classes, that’s OK—that’s me tweaking Tailwind CSS.

<div class="flex justify-center">
    <div class="max-w-md rounded overflow-hidden shadow-lg m-12">
        <h1 class="text-4xl m-6">Houston, we have a problem</h1>
        <img class="w-full" src="images/we-have-problem.jpg" />

        <p class="m-4">
            We couldn't find what you're looking for. Maybe it was a typo,
            or maybe we did something wrong. Whatever the case, you should probably go somewhere else.
        </p>
    </div>
</div>

Now, we can add the button that takes a user back to our home page. We first need to add the logic to do so.

To work with URIs in Blazor, you’ll utilize the NavigationManager class and its NavigateTo method. First, inject an instance at the top of the component file (we’ll name it Navigator):

@inject NavigationManager Navigator

Then, at the bottom of the file, under all the markup, add a @code block with the following one-liner:

@code {
    void BackHome() => Navigator.NavigateTo("/");
}

Let’s wire up the button to call this function—our first event! In the button markup add @onclick="BackHome", like so:

<button class="text-center m-4 bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded" @onclick="BackHome">
    🚀 Back to Mission Control
</button>

There are so many events to work with—in our case, we’re just calling a function when the button is clicked. Make sure to include the @, as this is a Razor directive attribute and not a JavaScript event attribute. You’ll also notice you just need to pass the method name.
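If you need details about the event itself, a handler can optionally accept an event args parameter (for clicks, that’s MouseEventArgs). Here’s a quick sketch, with a hypothetical button and method that aren’t part of our app:

<button @onclick="ShowDetails">Where did I click?</button>

@code {
    // MouseEventArgs carries coordinates, modifier keys, and more
    void ShowDetails(MouseEventArgs e)
        => Console.WriteLine($"Clicked at ({e.ClientX}, {e.ClientY})");
}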

With that done, head back to App.razor and add our new component to the NotFound section.

<NotFound>
    <LayoutView Layout="@typeof(MainLayout)">
        <NotFoundPage />
    </LayoutView>
</NotFound>

Now, fire up your app and enter a silly path like http://localhost:5000/kittykittylicklick, and you’ll see our new component in action.

Our 404 page

With our first component under our belt, let’s discuss testing our component.

Test our component with the bUnit library

Thanks to Egil Hansen, we can use the bUnit library to test our Blazor components. You can use bUnit to test everything about your components, no matter the complexity: you can test renders, event handlers, component state, async changes, mocking dependencies, and more.

Check out the bUnit documentation to learn more.

Set up your testing project

First, we’ll need to set up our test project. The quickest way is through the dotnet CLI, so fire up your terminal and navigate to your project root (where your solution file is).

Perform a one-time task to install the bUnit template. Right now, this is built upon the xUnit testing framework.

dotnet new --install bunit.template::1.0.0-beta-11

With that installed, create your test project. We’ll call it Test.

dotnet new bunit -o Test

Then, add the project to your solution:

dotnet sln ImageOfDay.sln add Test/Test.csproj

Finally, add a reference between the core project and your test project:

dotnet add Test/Test.csproj reference Client/Client.csproj

Head back to Visual Studio. If you refresh the app in Solution Explorer, you should see your Test project.

Write the test

We’re now ready to write our first test. Create a new class in our Test project, called NotFoundTest.cs.

In our test method, we’ll set up our tests to initialize a TestContext and render our component in question:

[Fact]
public void NotFoundComponentRendersCorrectly()
{
    using var ctx = new TestContext();
    var cut = ctx.RenderComponent<NotFoundPage>();
}

I’m not going to test for every HTML character in our component. Let’s just make sure the heading and button text are rendered.

// setup code
var h1Element = cut.Find("h1").TextContent;
var buttonElement = cut.Find("button").TextContent;

h1Element.MarkupMatches("Houston, we have a problem");
buttonElement.MarkupMatches("🚀 Back to Mission Control");

We first find the text content of our h1 and button tags. By text content, I mean the text inside the tags. This allows us to only check for the text and not have to hard-code the opening and closing tags and all the CSS classes.

Here’s how our complete test method looks:

[Fact]
public void NotFoundComponentRendersCorrectly()
{
    using var ctx = new TestContext();
    var cut = ctx.RenderComponent<NotFoundPage>();
    var h1Element = cut.Find("h1").TextContent;
    var buttonElement = cut.Find("button").TextContent;

    h1Element.MarkupMatches("Houston, we have a problem");
    buttonElement.MarkupMatches("🚀 Back to Mission Control");
}

The bUnit library works great with existing tooling. Using your favorite method, run your test (like Test > Run All Tests in Visual Studio), and you’ll see the results in Test Explorer.

Our test results in Test Explorer

Wondering about testing the button click? We can test the button click with bUnit, but it involves injecting the NavigationManager from our test class. It’s not documented yet, but there is a solution available. Injecting and mocking services is not a light topic, so I’ll reserve that for a future post.
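If you can’t wait, here’s a hedged sketch. Newer bUnit releases (the 1.x line, newer than the beta template we installed) ship a built-in FakeNavigationManager test double that TestContext registers automatically. Assuming one of those versions, the test might look something like this:

using Bunit;
using Bunit.TestDoubles;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

[Fact]
public void ClickingButtonNavigatesHome()
{
    using var ctx = new TestContext();
    var cut = ctx.RenderComponent<NotFoundPage>();

    // bUnit 1.x registers a FakeNavigationManager in the test services by default
    var navMan = ctx.Services.GetRequiredService<FakeNavigationManager>();

    cut.Find("button").Click();

    // NavigateTo("/") resolves against the fake's base URI
    Assert.Equal(navMan.BaseUri, navMan.Uri);
}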

Wrap up

In this post, we got our feet wet with our first component—a 404 page. We understood how routing works by understanding the App.razor file, created a NotFoundPage component, and introduced event handling with a button click. We then tested our new component with the bUnit Blazor component testing library.

I pushed all code for this post to the GitHub repository. See you next time!

]]>
<![CDATA[ Blast Off with Blazor: Get to know Blazor and our project ]]> https://www.daveabrock.com/2020/10/26/blast-off-blazor-intro/ 608c3e3df4327a003ba2fe69 Sun, 25 Oct 2020 19:00:00 -0500 A couple of weeks ago, I wrote about deploying an Azure Static Web App with Blazor and Azure Functions. In that post, I talked about using the app I built as a base for an upcoming Blazor series. This is it!

We’ll be building our site from the ground up with no Blazor experience required. All you’ll need is a passing knowledge of C# and .NET. If you don’t have that, no problem: brush up on some C# (I enjoy this Learn module) and come back when you’re ready.

What is Blazor?

If you’ve worked in the .NET ecosystem, it’s been hard to ignore the buzz around Blazor. If you’re new to Blazor or even .NET, a refresher is in order.

Blazor is a front-end (UI) framework using a single-page application model. These days, think of JavaScript libraries like React or Vue. With Blazor, you can build interactive UIs using C#. You do this by building reusable components using C#, CSS, and HTML. For when you can’t use C#, like working with a browser’s local storage, JavaScript interoperability is built in. (And if you truly are allergic to JavaScript, there are several community projects that can help you—like Blazored.)
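As a small illustration of that interop, here’s a minimal sketch (a hypothetical component calling the browser’s localStorage API):

@inject IJSRuntime JS

<button @onclick="SaveName">Save my name</button>

@code {
    // Calls the browser's localStorage.setItem through JS interop
    async Task SaveName()
        => await JS.InvokeVoidAsync("localStorage.setItem", "name", "Dave");
}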

Hosting models

If you select a Blazor project template in Visual Studio, you’ll need to pick a hosting model. Blazor offers two hosting models: Blazor Web Assembly and Blazor Server.

Blazor Web Assembly

With Blazor Web Assembly—which we’ll be using in this project!—Blazor and its dependencies (including the .NET runtime!) are downloaded to the browser. The entire app executes on the browser UI thread. All app assets, typically in wwwroot, are deployed as static files.

If you look at the footer of your generated HTML, you’ll see it uses blazor.webassembly.js. This file initializes the app’s runtime after downloading the app and its dependencies.

I’ve stolen borrowed this diagram from the Microsoft doc on the subject:

Blazor web assembly diagram

The good

  • No required ASP.NET Core server or dependency on .NET server-side, meaning a serverless deployment scenario (like for this app!) is possible
  • Work is completely offloaded to the client
  • Docker support

The not so good

  • The browser is your runtime, so I hope you like your browser
  • Download size is larger, so apps take much longer to load
  • Tooling support is not as great (but improving)

Blazor Server

With Blazor Server, your app executes on the server from an ASP.NET Core app. Any UI updates, JavaScript calls, and event handling are handled over a persistent SignalR connection.

I’ve once again stolen borrowed this diagram from the Microsoft doc on the subject:

Blazor Server diagram

The good

  • Small download size and fast loading time
  • You get to leverage .NET Core APIs and tooling, and share models between the client and server
  • Docker support
  • You don’t have to worry about WebAssembly support (although this will become less of a concern), or devices that don’t have a lot of resources to work with

The not so good

  • Higher latency, since a network call is required for each interaction
  • An ASP.NET Core server is required
  • No offline support like Blazor Web Assembly

We’ll learn much more about Blazor in great detail as we improve our application. So, what is this application? Keep reading for details.

Blasting off with our project

As written about previously, I’ve built an app called Blast Off With Blazor. You can see a live version at blastoffwithblazor.com and also clone the GitHub repository.

When the application loads, it calls an Azure Function. The Function, in turn, calls NASA’s Astronomy Picture of the Day (APOD) API to get a picture that is out of this world—literally. The APOD site has been serving up amazing images daily since June 16, 1995. Every time you reload the app, I’ll fetch an image from a random date between that start date and today.

blast off with blazor site

Right now, it’s super simple and super slow. We’ll fix that in upcoming posts as we learn Blazor together. As we build on it, we’ll be using additional awesome NASA APIs as we learn all about …

  • CSS and JavaScript isolation
  • Data binding
  • Event handling
  • State management
  • Routing
  • Configuration
  • Testing
  • PWAs

… and so on.

For now, let’s look at the code.

Our first code review

I wanted to start with some basic functionality. Let’s look through what I set up for you.

The API project

Let’s first look at the Api project as, for now, it’s pretty simple. As I said earlier, we’re going to call the NASA APOD API. If you don’t pass a date to the API it’ll return the latest photo. I preferred to fetch a random one.

So, in ImageGet.cs, I wrote a helper GetRandomDate() function. This returns a date between June 16, 1995 (when the API started) and today.

private static string GetRandomDate()
{
    var random = new Random();
    var startDate = new DateTime(1995, 06, 16);
    var range = (DateTime.Today - startDate).Days;
    return startDate.AddDays(random.Next(range)).ToString("yyyy-MM-dd");
}

Now that we have our date, we can work on our Azure Function.

[FunctionName("ImageGet")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("Executing ImageOfDayGet.");

    var apiKey = Environment.GetEnvironmentVariable("ApiKey");
    var response = await httpClient.GetAsync($"https://api.nasa.gov/planetary/apod?api_key={apiKey}&hd=true&date={GetRandomDate()}");
    var result = await response.Content.ReadAsStringAsync();
    return new OkObjectResult(JsonConvert.DeserializeObject(result));
}

There’s actually quite a bit going on in the method signature. We start with defining an HttpTrigger, which means it’ll execute when our app calls it. We pass in an AuthorizationLevel of Anonymous, which means the consuming app doesn’t have to pass in a function-specific API key. The “get” argument signifies we’ll only accept GET calls, and the Route of image defines the route template (it’ll respond to api/image calls).

In the body of our method, we get our API key from our configuration and call the API—passing in our key and the date. We’ll also elect to receive HD images. Once we get a result back, we’ll deserialize it so we can pass it back to the caller.

We’ll be focusing on Blazor, obviously, and not Azure Functions. This is probably the most we’ll get into Azure Functions in these posts. If you want to learn more about Azure Functions, have at it.

The Blazor project

If you navigate to the Client project, you’ll see just one page currently in the project, under the Pages directory: Index.razor.

This .razor file is a component. In Blazor, a component is a “chunk” or “part” of the UI (like a form, page, or even something as simple as a button). In our case, the page is our component. Eventually, we’ll want to change this to make it more reusable. We’ll get there.

These component files use Razor, a combination of C# and HTML. This is very similar to what’s implemented in ASP.NET MVC or Razor Pages.
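As a tiny sketch of Razor syntax (hypothetical markup, not from our project), C# expressions and control flow mix right into the HTML:

@if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
{
    <p>It's Friday: a great day to ship.</p>
}
else
{
    <p>The current time is @DateTime.Now.</p>
}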

The route template

The first thing you’ll see at the top of the page is this:

@page "/"

This @page directive signifies a route template. In our case, the root of the app. That’s all we need to know for now, but it can get a lot more complicated: you can even apply multiple route templates for a component.
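As a quick sketch, stacking directives is all it takes to apply multiple route templates (these paths are hypothetical):

@page "/"
@page "/home"
@page "/mission-control"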

The @code block

Skipping down to the bottom of the file, you’ll see a @code block—this is where we’ll define our C# code to associate with our component. As we make more complex components, we can even move this code to separate C# classes.

For now, let’s examine the @code block.

private Data.Image image;

In our Data project, we have the API model:

public class Image
{
    public string Title { get; set; }
    public string Copyright { get; set; }
    public DateTime Date { get; set; }
    public string Explanation { get; set; }
    public string Url { get; set; }
    public string HdUrl { get; set; }
}

More interestingly, we call OnInitializedAsync to fetch our image from the Azure Function:

protected override async Task OnInitializedAsync()
{
    image = await http.GetFromJsonAsync<Data.Image>("api/image");
}

Wait, what is that http reference? That’s us injecting HttpClient, from the top of the file:

@inject HttpClient http

Much like a normal .NET Core app, you can use dependency injection to inject a service into a Razor component.
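That injection works because the service is registered at startup. As a sketch, the default Blazor Web Assembly template’s Program.cs registers HttpClient roughly like this (details vary slightly between template versions):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static async Task Main(string[] args)
    {
        var builder = WebAssemblyHostBuilder.CreateDefault(args);
        builder.RootComponents.Add<App>("app");

        // Register HttpClient so components can @inject it
        builder.Services.AddScoped(sp => new HttpClient
        {
            BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
        });

        await builder.Build().RunAsync();
    }
}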

The markup

Next, you’ll see us use Razor syntax to render image properties onto the page. You’ll notice I check first if the image is null before rendering. The page will certainly load before the API call completes. In those cases, the page can error out if you don’t check for it. (I also saw that sometimes the API didn’t return a Copyright, so I handled that, too.)

@if (image != null)
{
    <div class="p-4">
        <h1 class="text-6xl">@image.Title</h1>
        <p class="text-2xl">@FormatDate(image.Date)</p>
        @if (image.Copyright != null)
        {
            <p>Copyright: @image.Copyright</p>
        }
    </div>

    <div class="flex justify-center p-4">
        <img src="@image.Url" class="rounded-lg h-500 w-500 flex items-center justify-center"><br />
    </div>
}

What about CSS?

In the markup, you might be wondering about the HTML classes. With CSS for this project, Chris Sainty and Jon Hilton sold me on Tailwind CSS. Tailwind is what we call a utility-first framework, providing a variety of low-level classes that allow you to iterate easily. It takes a little bit to get used to, but it sure beats dropping in pre-built components that find their way everywhere (hi, Bootstrap).

I won’t be going too much in-depth on CSS, knowing my limitations, but that’s what the markup classes are for.

Run the project locally

I’d love for you to join me as we learn Blazor together. Once you clone the GitHub repo, please look at the project README.md to understand how you can get this application up and running.

Wrap up

I know this was a long “introduction” but I wanted a nice first post to explain everything. In this post, we introduced Blazor, talked about Blazor hosting options, and reviewed our code. Stay tuned and thanks for reading!

]]>
<![CDATA[ The .NET Stacks #22: .NET 5 RC 2 ships, .NET Foundation all hands, and links ]]> https://www.daveabrock.com/2020/10/24/dotnet-stacks-22/ 608c3e3df4327a003ba2fe68 Fri, 23 Oct 2020 19:00:00 -0500 I messaged a developer friend to see if he liked my puns, and he said “false.” I told him I didn’t appreciate the cyber boolean.

On tap this week:

  • .NET 5 RC 2 ships
  • .NET Foundation updates
  • Last week in the .NET world

🚢 .NET 5 RC 2 ships

On Tuesday, Microsoft announced the release of .NET 5.0 RC 2. With the general release of .NET 5 on November 10, it’s the last release candidate (“near-final,” as Richard Lander puts it). You can download it here (and will also need the latest Visual Studio preview on Windows or Mac).

The biggest news out of the general announcement post? MSI installers are now available for ARM64 (yes!)—however, note the .NET 5 SDK does not contain the Windows Desktop components on ARM64. As a bonus, I learned that ClickOnce is still a thing and available for .NET Core 3.1 and .NET 5 Windows apps.

ASP.NET Core updates for Blazor

For being so late in the release cycle, ASP.NET Core was able to ship quite a few Blazor updates for RC 2.

As I’ve written about extensively, Blazor now ships with CSS isolation—the ability to scope styles to only a single component. Previously, when the feature was released in .NET 5 Preview 8, all scoped files were bundled into a big scoped.styles.css file. If you were styling a lot of components, the bundle could get quite heavy. Now, Blazor produces one bundle for each referenced project or package with the format MyProject.styles.css. (The official Microsoft doc on CSS isolation, which I’m writing, should be live in the next week or two.)

In addition, RC 2 comes with a bunch of Blazor tooling updates. The pesky static port issue was resolved, you can step out and over async methods, and you can debug lazy-loaded assemblies. Also, your tools can now throw warnings when using unsupported Blazor Web Assembly APIs from a browser. (This is part of a larger .NET 5 effort that annotates which APIs are supported with a platform compatibility analyzer.)

Entity Framework Core 5 updates

Because EF Core 5 RC 1 was so feature-rich—with many-to-many, mapping entity types to queries, event counters, SaveChanges interception, and much more—RC 2 was spent on bug fixes. (Also, if Jeremy Likness’s callouts in every update haven’t gotten your attention, know that EF Core 5 requires .NET Standard 2.1, meaning it won’t run on .NET Standard 2.0 platforms like .NET Framework.)

Next week, I’ll talk about the .NET 5 release cycles and how Microsoft is supporting them.

🧭 .NET Foundation “all hands” meeting recap

The .NET Foundation provided a September/October update this week, and also hosted an “all hands meeting” with the Board of Directors (which was live-streamed and open to all).

In the past, the Foundation has heard loud and clear that they need to work on better communication and openness. They’ve responded with regular updates, releasing budget info, a State of the Union update, and more. There’s a proposal to change the mission statement (and you can chime in on the open GitHub issue until October 26):

Our mission is to build and educate producers and consumers, both new and old of the .NET platform. We will grow a trusted OSS ecosystem adopted by education, commercial entities, and all users. We will lead by example creating a world-wide, healthy, vibrant, and diverse OSS community.

There was some mention of the .NET Foundation maturity model, which is currently sitting on the back burner. This model helps to improve the quality and quantity of .NET open source projects (and .NET in general). A big driver of this model is to get corporate sponsors to pay for .NET Foundation projects and there’s some open work on seeing that through—how do you make the projects sustainable without lazily asking maintainers to keep working hard on them?

Rodney Littles II made a great point in saying: “There are people in the community that are asking a lot of the .NET Foundation … We are the .NET community, what are we going to do for ourselves? We have to get past this concept of a Microsoft-driven construct.”

It’s easy to get skeptical about the “Microsoft-driven construct” comment—Microsoft owns .NET and has a huge financial stake in seeing it do well, so hoping for complete independence isn’t realistic. As a result, when it comes to the .NET Foundation Microsoft is both an asset and a liability. But it’s important for the community to not view the Foundation as something that needs to be driven by Microsoft. Yes, there will always be questions on Microsoft’s independence, but at a certain point the growth of the Foundation depends on the community.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ C# 10 First Look: Constant string interpolation ]]> https://www.daveabrock.com/2020/10/20/const-string-interpolation/ 608c3e3df4327a003ba2fe67 Mon, 19 Oct 2020 19:00:00 -0500 C# 9 is finally here, everybody.

This summer, we took a look at most of its core features. We looked at init-only features, records, pattern matching, top-level programs, and target typing and covariant returns.

Now, we can look forward to C# 10 (what is also referred to as C# Next). You probably think I’m crazy to be writing about C# 10 so early—and I am!—but there’s a feature slated for C# 10 that looks all set to go: constant string interpolation, a request that’s been hanging out on GitHub since 2015. Fred Silberberg from the Roslyn team says that it’s implemented in a feature branch and will likely be part of C# 10. As a matter of fact, if you head over to SharpLab, there’s a C# Next: Constant Interpolated Strings branch you can play with that’s been out there since … October 18, 2020.

While it’s madness to write about a new language feature so early, I don’t think it’s too crazy to talk about this specific feature. It’s super easy to implement so I don’t see that part changing too much—only when it gets shipped.

Heads up! This should go without saying, but things may change between now and the C# 10 release date. In other words: 🤷‍♂️.

An absurdly quick example

These days—in C# 9 or earlier, that is—if you want to merge constant strings together, you will have to use concatenation, and not interpolation (or remove them as const variables altogether).

A common use is with paths. In current times, you have to do this if you are working with constant strings:

const string myRootPath = "/src/to/my/root";
const string myFilePath = myRootPath + "/README.md";

With constant string interpolation, you can do this:

const string myRootPath = "/src/to/my/root";
const string myWholeFilePath = $"{myRootPath}/README.md";

Constant interpolated strings would be super convenient when working with attribute arguments. Let’s say you want to flag a method with the ObsoleteAttribute. Try this:

[Obsolete($"Ooh, don't use me. Instead, use {nameof(MyBetterType)}.")]

Why can’t I do that now?

When I learned about string interpolation back in C# 6, I was excited to think I’d never have to do concatenation again—so this is a bummer. Why can’t I do it, anyway?

If you try to use constant string interpolation in C# 9, you’ll get this error back:

error CS0133: The expression being assigned to 'myFilePath' must be constant

When you use string interpolation, the interpolated strings end up getting converted to string.Format calls. Once you understand that point, you’ll see it as:

const string myFilePath = string.Format("{0}/README.md", myRootPath);

While it may look like a constant at compile time, it isn’t, because of the string.Format invocation. Of course, concatenation works fine, since there’s hardly any magic gluing two string literals together.

What about culture impacts?

While you may think that your string in question is just constants, you might be wrong and it might not be completely obvious. Even something as simple as outputting a float has impacts across cultures.

For example, here in the US I can do this:

Console.WriteLine($"{1200.12}") // output: 1200.12;

Try a different culture, and the results are different (and, to our point, not constant):

CultureInfo.CurrentCulture = new CultureInfo("es-ES", false);
Console.WriteLine($"{1212.12}"); // output: 1200,12

When it comes to constant string interpolation, will there be unintentional side effects when working with culture dependencies? The team insists that won’t be an issue:

This feature won’t work in this scenarios. It’ll only work for interpolating other constant strings, so locale is not a concern. It’s really just a slightly different syntax for string concatenation.
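In other words, once the feature ships, the two forms below should be interchangeable (a sketch with hypothetical constants):

const string greeting = "Hello";

// With constant interpolated strings, these two declarations compile to the same thing
const string a = $"{greeting}, world";
const string b = greeting + ", world";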

Wrap up

In this post, we looked at constant string interpolation—a promising addition to C# 10. We discussed how to use it, the reasons why you can’t use it now, and addressed culture impacts.

If you’re interested in what is currently slated for the next release of C#, head over to the Language Feature Status page in the Roslyn GitHub repo.

]]>
<![CDATA[ Improve rendering performance with Blazor component virtualization ]]> https://www.daveabrock.com/2020/10/20/blazor-component-virtualization/ 608c3e3df4327a003ba2fe66 Mon, 19 Oct 2020 19:00:00 -0500 When measuring web performance—especially on page load—it’s not always about a consistent metric, such as how quickly the entire HTML tree loads from the server. It’s helpful to think in terms of perceived performance—do you understand what needs to be rendered now, and what can be rendered later? End users should never have to wait for something that, well, can wait.

The ASP.NET Core team recently rolled out Blazor component virtualization, a technique for limiting UI rendering to the visible page elements only. You can easily leverage this through a built-in Virtualize component.

Here’s a common scenario: let’s say you have a requirement to list a bunch of items on a table, and you might be stuck with a lot of data. If you’re listing several thousand items on a page, users often have to sit and wait for the entire site to load. With Blazor component virtualization, the app will load only the records in the user’s window, then render more only when scrolling.

Heads up! This post assumes you have some familiarity with Blazor, like how to create and render a basic component.

Prerequisites

To work with Blazor component virtualization, you need .NET 5 RC1 or greater. Head on over to the .NET 5 SDK downloads, or have Visual Studio 2019 Preview 3 (v16.8) or greater installed.

The “up and running in 30 seconds” solution

Assume you have a collection called people that’s a list of Person objects with properties like FirstName, LastName, and so on. In your component’s .razor file, replace your traditional for-each loop…

@foreach (var person in people)
{
    <p>
        @person.FirstName @person.LastName is only fun on Fridays.
    </p>
}

… with the Virtualize component, passing the collection into the Items parameter and using context to access your object’s properties:

<Virtualize Items="@people">
    <p>
        @context.FirstName @context.LastName is only fun on Fridays.
    </p>
</Virtualize>

Additionally, you could explicitly specify a context in your component:

<Virtualize Context="person" Items="@people">
    <p>
        @person.FirstName @person.LastName is only fun on Fridays.
    </p>
</Virtualize>

The component does the hard work of getting the height of your container and the size of the items to render. When we speak of the “container” we are talking about the rendered element: it can be one or more Razor components, a mix of HTML and Razor components, or plain old Razor code.

That’s really how easy it is, and will cover a majority of your use cases. Keep reading to understand the Virtualize component in greater detail—and how you can customize and extend it for your specific needs.

Our sample app

To further illustrate the need for Blazor component virtualization, let’s kick things up a notch. In the sample Blazor app’s FetchData component (or any component you’d like), let’s make a bunch of objects in memory when the page loads (in OnInitializedAsync). In the @code block of the component, we’ll populate a list of 10,000 cars to display on the page. (To state the obvious, we should never actually do this.)

private List<Car> cars;

protected override async Task OnInitializedAsync()
{
  cars = await MakeTenThousandCars();
}

private async Task<List<Car>> MakeTenThousandCars()
{
  List<Car> myCarList = new List<Car>();

  for (int i = 0; i < 10000; i++)
  {
    var car = new Car()
    {
      Id = Guid.NewGuid(),
      Name = $"Car {i}",
      Cost = i * 100
    };

    myCarList.Add(car);
  }  
  return await Task.FromResult(myCarList);
}

public class Car
{
  public Guid Id { get; set; }
  public string Name { get; set; }
  public int Cost { get; set; }
}

Then, in the markup: when we get all our cars, we’ll lay them out in a single table.

@if (cars == null)
{
  <p><em>Loading so many cars...</em></p>
}
else
{
  <table class="table">
    <thead>
      <tr>
        <th>Id</th>
        <th>Name</th>
        <th>Cost</th>
      </tr>
    </thead>
    <tbody>
      @foreach (var car in cars)
      {
        <tr>
            <td>@car.Id</td>
            <td>@car.Name</td>
            <td>@car.Cost</td>
        </tr>
       }
    </tbody>
  </table>
}

Launch your app, fire up your favorite dev tools, and head over to the Fetch data link at http://localhost:xxxx/fetchdata.

According to my dev tools with the cache disabled, load times range anywhere from 2.5 to 4 seconds (I refreshed ten times), with quite a bit of lag.

As before, all I need to do is replace my for-each loop with the following, then re-launch my application.

<Virtualize Items="cars" Context="car">
  <tr>
    <td>@car.Id</td>
    <td>@car.Name</td>
    <td>@car.Cost</td>
  </tr>
</Virtualize>

Now, we’re looking at 1.2 to 1.9 seconds uncached, about twice the speed.

In the following video, keep an eye on Chrome Developer tools. You’ll see how the elements change as I scroll.

The Items and Context are the most common parameters to use here, but there’s plenty more you can utilize.

Additional parameters

We’ll talk through four additional parameters: OverscanCount, ItemsProvider, Placeholder, and ItemSize.

OverscanCount

You can also set an OverscanCount parameter, which specifies how many more items to render before and after the viewable container. The default is currently three items (src). You may want to tweak this to prevent excessive rendering when you know there will be a lot of scrolling.

Here’s how we would do it in our first example:

<Virtualize Items="@cars" Context="car" OverscanCount="5">
  <tr>
    <td>@car.Id</td>
    <td>@car.Name</td>
    <td>@car.Cost</td>
  </tr>
</Virtualize>

Obviously, the higher the number the more elements you’ll render—so use this cautiously.

Lazy loading with ItemsProvider and Placeholder

To the casual observer, it might seem like the rendering fetches data periodically. In reality, all data is queued up in memory by default. If you don’t want to do this, you can work with an item provider delegate method—in C#, a delegate is a type that refers to methods with a parameter list and return type.
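For reference, here’s the shape of that delegate as I understand it from the Microsoft.AspNetCore.Components.Web.Virtualization namespace:

// Any method matching this signature can be assigned to the ItemsProvider parameter
public delegate ValueTask<ItemsProviderResult<TItem>> ItemsProviderDelegate<TItem>(ItemsProviderRequest request);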

For example, you might be calling an external API (or any other service) and not always know how much data you’re getting back. With the ItemsProvider parameter, you can ask for requested items on demand.

The provider method receives an ItemsProviderRequest, which contains a StartIndex (where to start), a Count (how many items to provide), and a CancellationToken. After fetching the data, the method returns an ItemsProviderResult<TItem> along with a total item count.

Let’s add this method to our @code block in our Razor file:

private async ValueTask<ItemsProviderResult<Car>> LoadCars(ItemsProviderRequest request)
{
  var cars = await MakeTenThousandCars();
  return new ItemsProviderResult<Car>(cars.Skip(request.StartIndex).Take(request.Count), cars.Count());
}

If you aren’t familiar with the LINQ syntax (there’s a quick sketch after this list):

  • The Skip operator skips over elements until we get to the StartIndex and return the remainder
  • The Take operator takes the next x elements from what Skip returned, where x is the Count to return
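Here’s a tiny, standalone sketch of that paging arithmetic (the collection is hypothetical):

using System.Linq;

var numbers = Enumerable.Range(1, 10_000);

// Skip past the first 20 items, then take the next 10 (items 21 through 30)
var page = numbers.Skip(20).Take(10);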

Before we update the Razor code, let’s talk about Placeholder.

Typically, with Blazor components that fetch data on initialization, you’ll see this pattern for displaying a message while the collection is populating.

@if (cars == null)
{
  <p><em>Loading so many cars...</em></p>
}

In our case, you can remove that block and replace it with a Placeholder. Here’s the updated code.

<Virtualize Context="car" ItemsProvider="@LoadCars">
    <ItemContent>
        <tr>
            <td>@car.Id</td>
            <td>@car.Name</td>
            <td>@car.Cost</td>
        </tr>
    </ItemContent>
    <Placeholder>
        <p>Loading so many cars...</p>
    </Placeholder>
</Virtualize>

ItemSize

You can also specify the size of each item, in pixels. The default is 50px (src).

Here’s how we’d do it in our example:

<Virtualize Items="@cars" Context="car" ItemSize="15">
  <tr>
    <td>@car.Id</td>
    <td>@car.Name</td>
    <td>@car.Cost</td>
  </tr>
</Virtualize>

Reminder: this isn’t a catch-all

Considering how easy it is to use Virtualize, it might be tempting to use it as a catch-all: I have to load a bunch of stuff, so I’ll drop it here. It’s important to use this component thoughtfully. For example, all items must be of a known, fixed height so that Blazor can calculate the total scroll range and, therefore, what to render.

For example, you might have elements that wrap unexpectedly when using different window sizes. In these cases, the virtualization logic won’t work as you expect (there’s some chatter about how to handle this).

As with anything else in your codebase, it’s a story of tradeoffs. Understand them before you implement.

Wrap up

In this post, we talked about the Virtualize component and how it can help you. We worked through a quick and dirty example, then worked through a list with a lot of records. Then, we talked about other parameters available to you, such as OverscanCount and ItemSize. We then discussed how to perform lazy loading with the ItemsProvider and Placeholder parameters.

]]>
<![CDATA[ The .NET Stacks #21: Azure Static Web Apps, .NET 6 feedback, and more! ]]> https://www.daveabrock.com/2020/10/16/dotnet-stacks-21/ 608c3e3df4327a003ba2fe65 Thu, 15 Oct 2020 19:00:00 -0500 Happy Monday! It looks like my pun from last week received some attention (and some new subscribers). I’m glad you all joined the newsletter before it gets code outside.

This week:

  • Catch up on Azure Static Web Apps
  • Provide input on .NET 6
  • Last week in the .NET world

⛅ Catch up on Azure Static Web Apps

This week, Anthony Chu joined the ASP.NET community standup to talk about Azure Static Web Apps. Azure Static Web Apps has been at the top of my “I really need to take a look at this” list, so the timing was right. 😎

Static sites are definitely the new hotness, and have been for a while. If content on your site doesn’t change that often, you’ll just need to serve up some static HTML files. For example, on my site I utilize a static site generator called Jekyll and host the content on GitHub Pages—this helps my site load super fast and with little overhead. Why introduce database overhead and the like if you don’t need it? (If it wasn’t clear, I’m talking about you, WordPress.)

Many times, though, you’ll also need to call off to an API eventually—this is quite common with SPAs like Vue, Angular, React, and now Blazor: you have a super-lightweight front-end that calls off to some APIs. A common architecture is serving up files statically with a serverless backend, such as Azure Functions.

Enter Azure Static Web Apps. Introduced at Ignite this year, Azure Static Web Apps allow you to leverage this architecture with one easy solution. If you’re good with GitHub lock-in and are looking for a static hosting solution, it’s worth a look.

You can check out the docs for the full treatment, but Azure Static Web Apps offers web hosting (duh), native Azure Functions support, GitHub triggers over GitHub Actions, free renewable SSL certificates, custom domains, a bunch of auth integrations, and fallback routes.

My favorite feature is GitHub PR testing sites. Once a PR kicks off, a GitHub Action executes and creates a temporary test staging site to view changes (it goes away once the PR is merged or discarded). This is a wonderful tool for you and others to test any changes to your app. Are you working on a complicated PR and want to have testers put your app through its paces? Send them the staging link.

It’s nice to have all this integrated in Azure, especially if that’s where you do a lot of business. But why not just use GitHub Pages if you don’t have a lot of complexity? That’s a fair question. With the just-announced Blazor Web Assembly support, Azure Static Web Apps is the clear winner. With Azure Static Web Apps, the GitHub Actions step becomes aware of a Blazor WebAssembly app and can do Blazor-specific precompression steps. Otherwise, you’d have to integrate an additional workflow into your app. With Azure Static Web Apps, it’s available out of the box.

👨‍💻 Provide input on .NET 6

With the general release of .NET 5 not even a month away, Microsoft is setting its sights on .NET 6.

Check out this GitHub issue to provide feedback on what features you want to see. Unsurprisingly, Blazor AoT compilation is leading the charge. As a whole, the ASP.NET folks have noted that speeding up the developer feedback loop is a big priority for .NET 6.

🎂 Happy birthday, Mom

Before we get into the links: I quickly wanted to wish my mom a very Happy Birthday, as she turns … 30? I love you, Mom. And to think I would never get you back for all the embarrassing moments.

Here’s a picture of us when I was young. I think she’s trying to convince me to use Kubernetes.

Picture with mom

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Blast Off with Blazor, Azure Functions, and Azure Static Web Apps ]]> https://www.daveabrock.com/2020/10/13/azure-functions-static-apps-blazor/ 608c3e3df4327a003ba2fe64 Mon, 12 Oct 2020 19:00:00 -0500 Static sites are so great. After all, you’re reading these words on a static site. Why bother with the overhead of dynamically generated files if you don’t need them? It’s not that static sites are boring—just that its served files, like HTML, aren’t generated dynamically. With less to do, these sites perform better and are cheaper to run.

This works great for blogs like this one, but in many cases, you’ll still need a server-side aspect. You’ll want to talk to some APIs and handle logic and whatnot. Traditionally, this would involve some storage providers like Azure Storage or Amazon S3. With the advent of serverless technologies like Azure Functions, we have even more to think about.

Now, with Azure Static Web Apps, we’ve got the best of both worlds: a light front-end paired with a serverless API in one pretty package. (The API is optional if you’re wondering.) With the recently announced Blazor support, I had to give it a shot.

Prerequisites

To work with this post, you’ll need an Azure account and a GitHub account.

Note that while .NET 5 is just weeks from going live at the time of this post, you aren’t yet able to deploy .NET 5 code to Azure Static Web Apps.

Our application

Before you create an Azure Static Web App instance, you’ll need an application hosted in GitHub—when you create the instance in Azure, it’ll ask for a repo and its details.

I wrote a quick app called Blast Off with Blazor (and you can see it live at blastoffwithblazor.com). It’s an admittedly simple app at this time, and I’ll be adding to it over the next few months as I dig deep on Blazor concepts and best practices. Feel free to clone the GitHub repo (and read the instructions in the README.md file for details on how to run it locally).

The app is served over Blazor Web Assembly. When the app loads, it calls an Azure Function. The Function, in turn, calls NASA’s Astronomy Picture of the Day (APOD) API to get a picture that is out of this world—literally. The APOD site has been serving up amazing images daily since June 16, 1995. Every time you reload the app, I’ll fetch an image from a random date between that start date and today.

In the Index.razor file, I’m doing that in the @code block in the OnInitializedAsync() function (full code):

@code {
    private Data.Image image;

    protected override async Task OnInitializedAsync()
    {
        image = await http.GetFromJsonAsync<Data.Image>("api/image");
    }
}

In my Azure Function, in the api/image path, I call off to the APOD API (full code):

[FunctionName("ImageGet")]
public static async Task<IActionResult> Run(
      [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "image")] HttpRequest req,
      ILogger log)
{
      var apiKey = Environment.GetEnvironmentVariable("ApiKey");
      var response = await httpClient.GetAsync($"https://api.nasa.gov/planetary/apod?api_key={apiKey}&hd=true&date={GetRandomDate()}");
      var result = await response.Content.ReadAsStringAsync();
      return new OkObjectResult(JsonConvert.DeserializeObject(result));
}

Once I get a response back, I access the properties of the returned object to display the image, title, date, and copyright on the page. It’s not rocket science (well, it kinda is), but it’ll do.

I won’t go to the moon and back on Blazor and Azure Functions in this post, as I’m focused on Azure Static Web Apps. In future posts, I’ll discuss each in greater detail.

Here’s the application in action. Puts stars in your eyes, doesn’t it?

The Blast Off with Blazor app

Before we blast off, let’s explore some app configuration.

Fallback route support

In single-page applications, fallback routes are important. Let’s assume I have a special path called /nasa. When I navigate there from within the app, the browser displays the /nasa path just fine, but if I refresh the page manually, by default, a 404 error can display, because there isn’t a /nasa page on the “server” (or, in our case, there isn’t a server at all).

Luckily, Azure Static Web Apps supports fallback routing. Drop a staticwebapp.config.json file in the root of your static assets folder (typically wwwroot) like so:

{
  "routes": [
    {
      "route": "/*",
      "serve": "/index.html",
      "statusCode": 200
    }
  ]
}

The /* route value makes sure to match all routes, serving up index.html so Blazor’s client-side router can take over.

CORS support

Because Azure Static Web Apps is configured with Azure Functions, you don’t have to deal with Cross-Origin Resource Sharing (CORS) issues—in short, this is when your browser blocks your request unless the API server allows it.

However, you’ll need to do this in your local environment. In the Properties folder of your Azure Functions project, create a launchSettings.json file with this:

{
     "profiles": {
         "Api": {
             "commandName": "Project",
             "commandLineArgs": "start --cors *"
         }
     }
 }

I’m using Visual Studio tooling. If you instead want to use Azure Functions from the command line, you’ll need to do this inside a local.settings.json file.

With the application ready to go, let’s head on over to the Azure Portal at portal.azure.com to create our Azure Static Web App.

Create an Azure Static Web App

Once you’re at the Azure Portal, search for static until you see Static Web Apps (Preview) show up—click that, and create a new instance.

From there, do the following:

  1. Select an existing subscription, and resource group (or create a new one).
  2. Enter a suitable name and region for your app. You’ll see the only current SKU is the free one.
  3. Click the Sign in with GitHub button to, well, sign in with your GitHub credentials and give Azure Static Web Apps rights to act on your behalf.
  4. Select your account, the repository where your code resides, and the branch.
  5. After you select those, a Build Details section displays. Select the type of app (like Blazor). When workflows trigger, they can use these presets to run steps based on app type—all out of the box.
  6. Important: select the App location, Api location, and App artifact location, relative to your repo in GitHub.
  • App location - in my case, the folder that contains my Blazor project file. The default location is Client.
  • Api location - select where your Azure Function project resides. The default location is Api.
  • App artifact location - the location of your deployed static files. The default is wwwroot.

Finally, click Review + Create, then Create. It should take just a few seconds to deploy your instance.

Your completed form should look similar to this:

The completed Azure Static Web Apps creation form

After the deployment completes, click Go to resource. You’ll see a bunch of links to your new URL, deployment history, and a workflow file. Before your site is ready, the workflow file executes right away (it resides in the .github/workflows directory of your repo).

Inspect the workflow file

The workflow file is a YAML file that instructs Azure Static Web Apps how to build your site. Here’s mine:

name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - main

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_JOLLY_GROUND_0BDB93D10 }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
          action: "upload"
          ###### Repository/Build Configurations - These values can be configured to match you app requirements. ######
          # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
          app_location: "Client" # App source code path
          api_location: "Api" # Api source code path - optional
          app_artifact_location: "wwwroot" # Built app content directory - optional
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_JOLLY_GROUND_0BDB93D10 }}
          action: "close"

This file looks daunting, but as you look through it, it’s actually pretty simple. If there’s a direct push to my main branch, or a pull request against it is created or changed, the Oryx build system—which automatically compiles source code into runnable artifacts—detects and builds our app and uploads its content (including the Azure Functions) to Azure.

Of course, you can modify this file to your liking.

You can head over to the Actions section of your GitHub repository to monitor the status of your workflow and view any logging.

A gotcha about package.json

I’d love to tell you the build succeeded on the first try, but it kept spacing out:

The failed GitHub Actions build

In my project, I use Gulp to build and deploy Tailwind CSS styles (thanks, Chris Sainty). By accident, I pushed my package.json file to my repo. Because of that, Oryx detected a Node.js install and was therefore looking for a build step in package.json. As a result, my build failed.

This was a silly mistake on my end, but I can definitely see other scenarios where this could be an issue. Azure Static Web Apps is in preview, so they are ironing out some things, and this might get addressed—but now you know. If you want to push your package.json, you’d need to edit your workflow file to change the build order, or use a new app_build_command parameter.

Pull request sites

This is my favorite thing about Azure Static Web Apps: when you create a pull request against your main branch, GitHub Actions creates a temporary URL for you to view changes. (When the PR is merged and closed, the workflow runs and deletes your temporary environment.)

For example, I created a PR to fix a minor bug. Once I click Create pull request in GitHub, a workflow will run and the PR will update with a link to the temporary environment:

The pull request comment with a link to the staging environment

Additionally, if you hit up the Environments link in the Azure Portal, it’ll display everything nicely:

The Environments view in the Azure Portal

A note on custom domains

Azure Static Web Apps supports custom domains. Sort of. I can have https://www.blastoffwithblazor.com but not https://blastoffwithblazor.com. That’s right: root domain support is not included.

Luckily, Burke Holland has you covered. I understand the impact of this change reaches beyond Azure Static Web Apps, so it might take some time to support.

Wrap up

This was … a blast. I hope it eclipsed your expectations. In this post, we discussed the value of Azure Static Web Apps. We walked through our out-of-this-world sample app, created an Azure Static Web Apps solution, explored the workflow file, explained some gotchas, and saw how pull request sites work.

If you’re big on Blazor stay tuned—the app will be getting some enhancements as we learn more about Blazor together!

Of course, if you have any questions, comet me on Twitter.

]]>
<![CDATA[ How to Docker with .NET: Getting Started ]]> https://www.daveabrock.com/2020/10/10/docker-aspnet-core-intro/ 608c3e3df4327a003ba2fe63 Fri, 09 Oct 2020 19:00:00 -0500 As a developer, do you ever think about all the time you spend … not developing? Among the work we do to get things working—I like to call it meta-work—infrastructure is the most frustrating and the biggest headache. It takes so much of our time and impacts our ability to deliver value quickly.

Thanks to software containerization—and, specifically, Docker—we can improve our workflow significantly. We can think of containers as isolated environments that allow us to run our software quickly. These containers allow us to package running images that include all our code and dependencies on any environment.

To be clear, I am tremendously late to this party. (Like, dinner is over and people are starting to ask “where’s Dave?” as if I got in an accident on the way there.) While I’ve always known about Docker from a high level—the value it brings, how to get an image, and so on—my new job responsibilities have forced me to understand it on a tremendously deeper level.

While I’ve seen a lot of great posts on this topic (such as Steve Gordon’s wonderful series from 2017), I wanted to write about all I’m learning—and hopefully it can help you, too.

I’ll be walking through Docker containers in three posts:

  • This post: Getting started (core concepts, run a pre-built image)
  • Next post: Diving deep (create image from existing app, considerations)
  • Last post: Publishing to Azure Kubernetes Service (AKS)

Armed with this knowledge, I’ll then be writing about container orchestration using Kubernetes.

Ensure Docker and Git are installed

Before starting this post, make sure you have the following installed:

  • Docker Desktop
  • Git

Confirm your Docker install by executing docker -v in your terminal.

How Docker works

When we talk about packaging our software, we’re not just talking about code (if only it was that simple). There’s our operating system, the code itself, packages, libraries, binaries, and more. For example, you might be developing a .NET Core app on Linux with a database, while using a reverse proxy configuration. That sentence alone should be rationale enough for standardized environments.

When we’re running an image, that’s our container. If an image is our blueprint, the container is an instance of it running in memory. These images are immutable: you can’t change them once they’re built. This guarantees isolation, consistency, and performance.

What is a host?

A host is the OS where Docker runs. If you’re running Linux containers like most of the world, Docker shares the host kernel if the binary can access the kernel directly—preventing the need for a container OS.

Windows containers do need a container OS that is part of the packaged image, however, for file system access and memory management. Windows containers come in handy (and are essential) if you’re running things dependent on Windows APIs, like legacy .NET Framework apps. If you don’t have those needs, Linux containers are almost always the way to go.

Base images and parent images

In Docker, a base image is one built on Docker’s minimal image, called scratch—an empty container image. It’s the equivalent of a piece of white paper: you’ve got something to draw with, but you’ll need to do the sketching.

Typically, you’ll instead build from a parent image, an image you can use as a starting point for your images—in developer language, a base class. Instead of using a base image to install an OS and a runtime, you can use a parent image instead.

Much like the IaaS vs. PaaS arguments in the cloud space, it comes down to control and how dirty you want your hands to get.
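To make that concrete, here’s a sketch of the two starting points—these are alternative first lines for a Dockerfile, and the parent image tag is just an example:

# Option 1: build from the base image—an empty canvas where you do all the sketching
FROM scratch

# Option 2: build from a parent image—here, one with the ASP.NET Core runtime baked in
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1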

Using a Dockerfile

Docker uses a Dockerfile, a file that contains instructions on how to build an image. In a Dockerfile you’ll specify the image to use, commands to run, artifacts (your app and its dependencies), exposed ports, and what to run when the container starts running.

We’ll get into much more detail in the next post, but here’s a sample Dockerfile:

# Step 1: Specify your base image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1

# Step 2: Copy project file to the /src container folder
WORKDIR /src
COPY ["MyCoolApp/MyCoolApp.csproj", "MyCoolApp/"]

# Step 3: Run a restore to download dependencies
RUN dotnet restore "MyCoolApp/MyCoolApp.csproj"

# Step 4: Copy app code to the container
COPY . .
WORKDIR "/src/MyCoolApp"

# Step 5: Build the app in Release mode
RUN dotnet build "MyCoolApp.csproj" -c Release -o /app

# Step 6: Publish the application
RUN dotnet publish "MyCoolApp.csproj" -c Release -o /app

# Step 7: Expose port 80 in the container
EXPOSE 80

# Step 8: Move the /app folder and run the compiled app
WORKDIR /app
ENTRYPOINT ["dotnet", "MyCoolApp.dll"]

With this Dockerfile, you’ll then be able to run something like docker build -t mycoolapp . to build your image.

A note on ports

Pay special attention to Step 7, where we open a port on the Docker container. You might think a common port, 80, should be open by default. It is not. By default, Docker doesn’t allow inbound requests to your container. You will need to explicitly tell Docker.

If you fetch and run a pre-built image from Docker Hub using docker run and get errors, you need to run it with the -p flag. A common use is -p 8000:80, which maps port 80 in your container to port 8000 on your local machine.
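Putting those two ideas together, here’s a quick sketch of building our sample image and running it with an explicit port mapping (the container name is arbitrary):

docker build -t mycoolapp .
docker run -p 8000:80 --name mycoolapp_local mycoolapp

With that mapping in place, the app answers at http://localhost:8000.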

Armed with this knowledge, we’re ready to get our feet wet by running a pre-built ASP.NET Core image. (In the next post, we’ll Docker-ize an app we wrote.)

Fetch, build, and run a prebuilt Docker image

Existing Docker images must be hosted in a container registry—the most common public registry is Docker Hub. If you go to Docker Hub and search for .net core, you’ll see quite a few repositories hosted by Microsoft.

Docker Hub

We’re going to fetch an existing ASP.NET Core sample app. Head on over to the samples repo to look at the instructions.

The first thing you see is a sample command to docker pull the image. That’ll work just fine, but when we run the image, Docker will pull it for us automatically if it doesn’t exist.

To run the sample app based on a pre-built image, execute the following from your terminal:

docker run -p 8000:80 --name my_sample mcr.microsoft.com/dotnet/core/samples:aspnetapp

This command fetches the image and runs a container from it. The image is stored at mcr.microsoft.com/dotnet/core/samples, has a tag of aspnetapp (tags are text labels that assist with versioning), and the -p flag maps port 80 in the container to port 8000 on your local machine.

If you received an error because of a port conflict, you’ll need to select a port that you aren’t currently using.

Your output should be similar to the following:

Unable to find image 'mcr.microsoft.com/dotnet/core/samples:aspnetapp' locally
aspnetapp: Pulling from dotnet/core/samples
fa2fa425bc39: Already exists
9fb46f3edd09: Pull complete
39a135016f79: Pull complete
4f3a5a9bf323: Pull complete
67b18db7ec89: Pull complete
53bfb409e62d: Pull complete
e223eece4c87: Pull complete
85af4a87bdd5: Pull complete
d445ffa5a90f: Pull complete
1fd0bebc24e8: Pull complete
Digest: sha256:d7e0b7f7380d07d2ab32f4082a194e4bc8ac4374cd8bce9c7495f980cf59804f
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/samples:aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory 'C:\Users\ContainerUser\AppData\Local\ASP.NET\DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\app

To take a look at the state of your running images, run the docker ps command. (Alternatively, you can run docker ps -a to see the status of all running and stopped containers—check out the docs for all your options.)

The terminal responds with a bunch of great information: your container ID, the image registry location, the command used to run it, when it was created, its status, and the ports it’s using.
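Trimmed to fit here, the output looks something like this (the container ID is made up—yours will differ):

CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS         PORTS                  NAMES
f8f9de0a331d   mcr.microsoft.com/dotnet/core/samples:aspnetapp   "dotnet aspnetapp.dll"   2 minutes ago   Up 2 minutes   0.0.0.0:8000->80/tcp   my_sample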

Now, launch your favorite browser and enter http://localhost:8000 (or whatever port you specified):

A view of our pre-built image

See how easy that was? Now that is the value of containerization—going out, fetching an image, and not having to worry about all the ridiculous infrastructure headaches.

When we run docker image list we can see a listing of our installed images, and our new one is right at the top:

REPOSITORY                              TAG         IMAGE ID            CREATED             SIZE
mcr.microsoft.com/dotnet/core/samples   aspnetapp   c946cb101055        3 weeks ago         352MB

You’ll notice that the app is served over HTTP by default (and good catch, by the way).

Serve image over HTTPS

When we’re playing around locally, HTTP is fine. But if we want to mimic real-world scenarios—and with containerization, the whole point is predictable environments!—we should get in the habit of running our containers over HTTPS. The instructions are slightly different between Windows and macOS/Linux (and assume you’re using a pre-built Linux container).

On either platform, the behavior is the same: a self-signed development certificate is provisioned for your container over localhost. Then, you’ll be able to access your app from:

  • http://localhost:8000
  • https://localhost:8001

Windows

From your Windows terminal, execute the following commands (and replace mypassword with something more memorable):

dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p mypassword
dotnet dev-certs https --trust

Then, execute docker run doing the following (changing mypassword accordingly):

docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="mypassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:/https/ mcr.microsoft.com/dotnet/core/samples:aspnetapp

macOS/Linux

From your Mac/Linux terminal, execute the following commands (and replace mypassword with something more memorable):

dotnet dev-certs https -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p mypassword
dotnet dev-certs https --trust

Then, execute the following (changing mypassword accordingly):

docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="mypassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v ${HOME}/.aspnet/https:/https/ mcr.microsoft.com/dotnet/core/samples:aspnetapp

Check out the Microsoft documentation for the complete details.

Image cleanup

If you’re done playing with your container, you can get rid of it. We’ll stop the container, remove it, and then remove the image it came from. (If the container is already stopped, you can skip the first step.)

Before we delete our container, we’ll stop it first. To stop our container, run the following command:

docker stop my_sample

If successful, the terminal will respond with its best “I am Groot” impression:

my_sample

You can now remove the container with the following command:

docker rm my_sample

Yes, we could have run docker container rm -f my_sample to do it all at once. I wouldn’t recommend this forceful approach, as it does not perform a graceful shutdown.

You just removed the container you created. If you also want to delete the image it was built from, use the docker image rm command, like this (all containers using the image must be removed first):

docker image rm mcr.microsoft.com/dotnet/core/samples:aspnetapp

Wrap up

In this post, we got our feet wet with Docker. We discussed its value and core concepts such as hosts, base and parent images, and using a Dockerfile. Then, we fetched and ran a pre-built ASP.NET Core container image. We served the container over HTTPS, and then discarded the container and its image.

Stay tuned for the next post as we use this knowledge to Dockerize our existing applications.

]]>
<![CDATA[ The .NET Stacks #20: Route to Code, IdentityServer, community links ]]> https://www.daveabrock.com/2020/10/09/dotnet-stacks-20/ 608c3e3df4327a003ba2fe62 Thu, 08 Oct 2020 19:00:00 -0500 Based on my eating habits this week, I should probably rename this newsletter to The .NET Snacks.

This week:

  • Use “route to code” with ASP.NET Core
  • The IdentityServer project goes commercial
  • Understand Blazor WebAssembly performance best practices
  • Last week in the .NET world

Use “route to code” with ASP.NET Core

I’ve been thinking a lot about “route to code” in ASP.NET Core lately. There isn’t much out there, but I’ve found some good content: the ON.NET Show brought up the concept recently, and Anthony Giretti has been writing some nice posts about it lately, too.

As an ASP.NET developer, you’ve likely leveraged controllers in MVC or Web API apps. That is, the controller will intercept your HTTP request and handle routing for you—you can do some configuration, but it’s largely managed for you. This is great if you don’t need much control, but it sure adds a lot of ceremony (if you’ve ever waited for ASP.NET to scaffold a new MVC app, you know what I mean). You may have scenarios where you need fine-grained control, simplicity, or high performance without an explicit framework.

This “route to code” concept offers a solution somewhere between ASP.NET Core middleware and controllers—where you can handle routing right in the Startup.cs file of your ASP.NET Core project. You can get started by creating an “Empty” ASP.NET Core web app in Visual Studio.

If we look at Anthony’s example, he creates a list of countries and instantiates it in the Startup constructor. The fun stuff, though, is in the app.UseEndpoints middleware.

Creating endpoints
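If you don’t have Anthony’s post handy, the idea boils down to something like this—a minimal sketch where the routes and the countries list are placeholders, not his exact code:

public class Startup
{
    // A stand-in for the countries list instantiated in the constructor
    private readonly List<string> _countries = new List<string> { "Brazil", "Japan", "Norway" };

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            // Match GET /countries and handle it right here—no controller required
            endpoints.MapGet("/countries", async context =>
            {
                await context.Response.WriteAsync(string.Join(", ", _countries));
            });

            // Routing templates work, too
            endpoints.MapGet("/countries/{name}", async context =>
            {
                var name = context.Request.RouteValues["name"];
                await context.Response.WriteAsync($"You asked about {name}");
            });
        });
    }
}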

Here, we’re using the MapGet extension method—you use it to match the HTTP/URL method, and then execute by running a delegate (and yes, there are other methods for the other HTTP verbs). You can definitely use this in more complex ways—like using string interpolation to create routing templates, adding authentication, and dependency injection. It takes some getting used to after years of depending on controllers—but it’s a great way to cut straight to what matters.

The IdentityServer project goes commercial

This week, it was announced that IdentityServer—an open-source OpenID Connect (OIDC) and OAuth 2.0 framework for ASP.NET and ASP.NET Core—has gone commercial. With 12 million NuGet downloads to date for the IdentityServer4 package, this is a big deal. For most of us, we’ve used the free (for us) library for the last decade. While congratulations are in order for Brock Allen and Dominick Baier—and they should be praised for finding a path for sustaining the project over the long term—a logical next question is what this means for the greater .NET ecosystem.

From Dominick Baier’s post on the news:

The current version (IdentityServer4 v4.x) will be the last version we work on as free open source. We will keep supporting IdentityServer4 until the end of life of .NET Core 3.1 in November 2022.
To continue our work, we have formed a new company Duende Software, and IdentityServer4 will be rebranded as Duende IdentityServer. Duende IdentityServer will contain all new feature work and will target .NET Core 3.1 and .NET 5 (and all versions beyond).

What does this mean for use in your projects? IdentityServer4 is still free, and it appears it always will be. However, in two years it won’t be supported and you won’t get any critical security updates for it. IdentityServer5 will have this new pricing model. (And short term, .NET will ship with IdentityServer4 templates.)

As for that pricing model: this new Duende IdentityServer company will offer two versions of IdentityServer. You’ll have a free reciprocal public license (RPL) for folks using open-source work, and a commercial license that is paid (the exact charges based on company size and usage). I’m seeing the $1500 price point being passed around, but others have noted it isn’t so simple. You’ll also want to check out the lively GitHub issue that discusses where to go from here.

If you don’t want to pay for it, fine—you’ve got two years on IdentityServer4 and you can evaluate if less complex solutions like the out-of-the-box Microsoft.AspNetCore.Identity work better for you. If not, you can roll your own solution. To that I say: unless you’re a security expert you’ll probably find the cost in development time and headaches far exceeds licensing for IdentityServer (for most cases). If your company needs the complex use cases and can pay for it, my opinion is to do just that.

Understand Blazor WebAssembly performance best practices

This week, Steve Sanderson—the architect at Microsoft behind Blazor—announced he’s working on documenting Blazor WebAssembly performance best practices. While performance is always important, it appears doubly so when you’re loading a complete .NET runtime into the browser. I definitely learned a lot, and it’s worth a read.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET Core / Blazor

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #19: An Ignite recap and F# with Phillip Carter ]]> https://www.daveabrock.com/2020/10/03/dotnet-stacks-19/ 608c3e3df4327a003ba2fe61 Fri, 02 Oct 2020 19:00:00 -0500 This week, we’ll review Ignite 2020, talk with Phillip Carter about F# from the Microsoft perspective, and check out what’s going on around the community.

Ignite 2020 is a wrap

Build 2020 seems like so long ago, doesn’t it? I covered it in our very first issue four months ago. This week, Microsoft put on Ignite 2020—virtually, of course. If you’ve followed .NET over the last few months, you didn’t see a lot of surprises from the developer perspective—but there’s still some good stuff to go through.

Let’s see: .NET 5 RC is now available (but as a loyal reader, you knew that), Azure Communication Services is now in preview (with developers asking what that means for Twilio), Visual Studio 2019 support for GitHub Codespaces is now in beta, there are new capabilities for Azure Logic Apps, and Azure Cosmos DB now has a serverless option. There’s more to see, depending on your focus—so check out the Ignite 2020 Book of News for the full treatment.

You can check out the sessions on-demand now. Here are some that should be popular with .NET developers:

In case it isn’t completely obvious, these events are sales pitches for Azure—and the long game for developer productivity beats AWS and GCP by a country mile.

Would you like to get started in minutes without worrying about device specs or mind-numbing setup? Try GitHub Codespaces, a containerized environment in the cloud. Would you like to deploy your repo to the cloud in seconds? Easy. When you own the developer tools, the cloud platform, and the de-facto code sharing service in the industry, the “see how easy that was?” model is really paying off.

Dev Discussions: Phillip Carter

Last week, we talked to Isaac Abraham about F# from a C# developer’s perspective. This week, I’m excited to get more F# perspectives from Phillip Carter. Phillip is a busy guy at Microsoft but a big part of his role as a Program Manager is overseeing F# and its tooling.

In this interview, we talk to Phillip about Microsoft F# support, F# tooling (and how it might compare to C#), good advice for learning F#, and more.

Phillip Carter

From the build system to the tooling, is F# a first-class citizen in Visual Studio and other tools like C# is? If I’m a C# dev coming over, would I be surprised about things I am used to?

For anyone who uses Visual Studio to primarily edit code, then F# may take some getting used to, but most of the things you’re used to using are present. In that sense, it is first class. Project integration, semantic colors, IntelliSense, tooltips (more advanced than those in C#), some refactorings and analyzers, and so on.

C# has objectively more IDE features than F#, and the number of refactorings available in F# tooling absolutely pales in comparison to C# tooling. Some of these come down to how each language works, though, so it’s not quite so simple as “F# could just have XYZ features that C# has.” But overall, I think people tend to feel mostly satisfied by the kinds of tooling available to them.

It’s often claimed that F# needs less refactoring tools because the language tends to guide programmers into one way to do things, and the combination of the F# type system and language design lends itself towards the idiom, “if it compiles, it works right.” This is mostly true, though I do feel like there are entire classes of refactoring tools that F# developers would love to use, and they’d be unique to F# and functional programming.

What’s the biggest hurdle you see for people trying to learn F#, especially from OO languages like C# or Java?

I think that OO programming in mainstream OO languages tends to over-emphasize ceremony and lead to overcomplicated programs. A lot of people normalize this and then struggle to work with something that is significantly simpler and has less moving parts.

When you expect something to be complicated and it’s simple, this simplicity almost feels like it’s mocking you, like the language and environment is somehow smarter than you. That’s certainly what I felt when I learned F# and Haskell for the first time.

Beyond this, I think that the biggest remaining hurdles simply are the lack of curly braces and immutability. It’s important to recall that for many people, programming languages are strongly associated with curly braces and they can struggle to accept that a general-purpose programming language can work well without them.

C, C++, Java, C#, and JavaScript are the most widely used languages and they all come from the same family of programming languages. Diverging greatly from that syntax is a big deal and shouldn’t be underestimated. This syntax hurdle is less of a concern for Python programmers, and I’ve found that folks with Python experience are usually warmer to the idea of F# being whitespace sensitive. Go figure!

The immutability hurdle is a big one for everyone, though. Most people are trained to do “place-oriented programming”—put a value in a variable, put that variable in a list of variables, change the value in the variable, change the list of variables, and so on. Shifting the way you think about program flow in terms of immutability is a challenge that some people never overcome, or they prefer not to overcome because they hate the idea of it. It really does fundamentally alter how you write a program, and if you have a decade or more of training with one way to write programs, immutability can be a big challenge.

We’re seeing a lot of F# inspiration lately in C#, especially with what’s new in C# 9 (with immutability, records, for example). Where do you think the dividing line is between C# with FP and using F#? Is there guidance to help me make that decision?

I think the dividing line comes down to two things: what your defaults are and what you emphasize.

C# is getting a degree of immutability with records. But normal C# programming in any context is mutable by default. You can do immutable programming in C# today, and C# records will help with that. But it’s still a bit of a chore because the rest of the language is just begging you to mutate some variables.

They’re called variables for a reason! This isn’t a value judgement, though. It’s just what the defaults are. C# is mutable by default, with an increasing set of tools to do some immutability. F# is immutable by default, and it has some tools for doing mutable programming.

I think the second point is more nuanced, but also more important. Both C# and F# implement the .NET object system. Both can do inheritance, use accessibility modifiers on classes, and do fancy things with interfaces (including interfaces with default implementations). But how many F# programmers use this side of the language as a part of their normal programming? Not that many. OOP is possible in F#, but it’s just not emphasized. F# is squarely about functional programming first, with an object programming system at your disposal when you need it.

On the other hand, C# is evolving into a more unopinionated language that lets you do just about anything in any way you like. The reality is that some things work better than others (recall that C# is not immutable by default), but this lack of emphasis on one paradigm over the other can lead to vastly different codebases despite being written in the same language.

Is that okay? I can’t tell. But I think it makes identifying the answer to the question, “how should I do this?” more challenging. If you wanted to do functional programming, you could use C# and be fine. But by using a language that is fairly unopinionated about how you use it, you may find that it’s harder to “get it” when thinking functionally than if you were to use F#. Some of the principles of typed functional programming may feel more difficult or awkward because C# wasn’t built with them in at first. Not necessarily a blocker, but it still matters.

To read the entire interview, head on over to my site.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor / ASP.NET Core

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #18: RC1 is here, the fate of .NET Standard, and F# with Isaac Abraham ]]> https://www.daveabrock.com/2020/09/26/dotnet-stacks-18/ 608c3e3df4327a003ba2fe60 Fri, 25 Sep 2020 19:00:00 -0500 .NET 5 RC1 is here

This week, Microsoft pushed out the RC1 release for .NET 5, which is scheduled to officially “go live” in early November. RC1 comes with a “go live” license, which means you get production support for it. With that, RC1 versions were released for ASP.NET Core and EF Core as well.

I’ve dug deep on a variety of new features in the last few months or so—I won’t rehash them here. However, the links are worth checking out. For example, Richard Lander goes in-depth on C# 9 records and System.Text.Json.

The fate of .NET Standard

While there are many great updates to the upcoming .NET 5 release, a big selling point is at a higher level: the promise of a unified SDK experience for all of .NET. The idea is that you’ll be able to use one platform regardless of your needs—whether it’s Windows, Linux, macOS, Android, WebAssembly, and more. (Because of internal resourcing constraints, Xamarin will join the party in 2021, with .NET 6.)

Microsoft has definitely struggled in communicating a clear direction for .NET the last several years, so when you pair a unified experience with predictable releases and roadmaps, it’s music to our ears.

You’ve probably wondered: what does this mean for .NET Standard? The unified experience is great, but what about when you have .NET Framework apps to support? (If you’re new to .NET Standard, it’s more or less a specification: your library targets a version of the Standard, and any .NET implementation that supports that version is guaranteed to provide all of its APIs.)

Immo Landwerth shed some light on the subject this week. .NET Standard is being thrown to the .NET purgatory with .NET Framework: it’ll still technically be around, and .NET 5 will support it—but the current version, 2.1, will be its last.

As a result, we have some new target framework names: net5.0, for apps that run anywhere, combines and replaces netcoreapp and netstandard. There’s also net5.0-windows (with Android and iOS flavors to come) for Windows-specific use cases, like UWP.

OK, so .NET Standard is still around but we have new target framework names. What should you do? With .NET Standard 2.0 being the last version to support .NET Framework, use netstandard2.0 for code sharing between .NET Framework and other platforms. You can use netstandard2.1 to share between Mono, Xamarin, and .NET Core 3.x, and then net5.0 for anything else (and especially when you want to use .NET 5 improvements and new language features). You’ll definitely want to check out the post for all the details.
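To make the targeting advice concrete, here’s a minimal sketch of a shared library’s project file—multi-targeting so .NET Framework consumers keep working while .NET 5 consumers get the new bits:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 keeps .NET Framework consumers working;
         net5.0 lights up the new APIs and language features -->
    <TargetFrameworks>netstandard2.0;net5.0</TargetFrameworks>
  </PropertyGroup>
</Project>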

What a mess: .NET Standard promised API uniformity, and now we’re having to choose between that and a new way of doing things. The post lays out why .NET Standard is problematic, and it makes sense. But when you’re trying to innovate at a feverish pace while still supporting customers on .NET Framework, the cost is complexity—and the irony is that the uniformity .NET 5 promises won’t apply when you have legacy apps to support.

Dev Discussions: Isaac Abraham

As much as we all love C#, there’s something that needs reminding from time to time: C# is not .NET. It is a large and important part of .NET, for sure, but .NET also supports two other languages: Visual Basic and F#. As for F#, it’s been gaining quite a bit of popularity over the last several years, and for good reason: it’s approachable, concise, and allows you to embrace a functional-first language while leveraging the power of the .NET ecosystem.

I caught up with Isaac Abraham to learn more about F#. After spending a decade as a C# developer, Isaac embraced the power of F# and founded Compositional IT, a functional-first consultancy. He’s also the author of Get Programming with F#: A Guide for .NET Developers.

Isaac Abraham

I know it’s more nuanced than this: but if you could sell F# to C# developers in a sentence or two, how would you do it?

F# really does bring the fun back into software development. You’ll feel more productive, more confident and more empowered to deliver high-quality software for your customers.

Functional programming is getting a lot of attention in the C# world, as the language is adopting much of its concepts (especially with C# 9). It’s a weird balance: trying to have functional concepts in an OO language. How do you feel the balance is going?

I have mixed opinions on this. On the one hand, for the C# dev it’s great—they have a more powerful toolkit at their disposal. But I would hate to be a new developer starting in C# for the first time. There are so many ways to do things now, and the feature (and custom operator!) count is going through the roof. More than that, I worry that we’ll end up with a kind of bifurcated C# ecosystem—those that adopt the new features and those that won’t, and worse still: the risk of losing the identity of what C# really is.

I’m interested to see how it works out. Introducing things like records into C# is going to lead to some new and different design patterns being used that will have to naturally evolve over time.

I won’t ask if C# will replace F#—you’ve eloquently written about why the answer is no. I will ask you this, though: is there a dividing line of when you should use C# (OO with functional concepts) or straight to F#?

I’m not really sure the idea of “OO with functional concepts” really gels, to be honest. Some of the core ideas of FP—immutability and expressions—are kind of the opposite of OO, which is all centered around mutable data structures, statements and side effects. By all means: use the features C# provides that come from the FP world and use them where it helps—LINQ, higher order functions, pattern matching, immutable data structures—but the more you try out those features to try what they can do without using OO constructs, the more you’ll find C# pulls you “back.” It’s a little like driving an Audi on the German motorway but never getting out of third gear.

My view is that 80% of the C# population today—maybe more—would be more productive and happier in F#. If you’re using LINQ, you favour composition over inheritance, and you’re excited by some of the new features in C# like records, switch expressions, tuples, and so on, F# will probably be a natural fit for you. All of those features are optimised as first-class citizens of the language, whilst things like mutability and classes are possible but somewhat atypical.

This also feeds back to your other question—I do fear that people will try these features out within the context of OO patterns, find them somehow limited, and leave thinking that FP isn’t worthwhile.

Let’s say I’m a C# programmer and want to get into F#. Is there any C# knowledge that will help me understand the concepts, or is it best to clear my mind of any preconceived notions before learning?

Probably the closest concept would be to imagine your whole program was a single LINQ query. Or, from a web app—imagine every controller method was a LINQ query. In reality it’s not like that, but that’s the closest I can think of. The fact that you’ll know .NET inside and out is also a massive help. The things to forget are basically the OO and imperative parts of the language: classes, inheritance, mutable variables, while loops, and statements. You don’t really use any of those in everyday F# (and believe me, you don’t need any of them to write standard line of business apps).

As an OO programmer, it’s so painful always having to worry about “the billion dollar mistake”: nulls. We can’t assume anything since we’re mutating objects all over the place and often throw up our hands and do null checks everywhere (although the language has improved in the last few versions). How does F# handle nulls? Is it less painful?

For F# types that you create, the language simply says: null isn’t allowed, and there’s no such thing as null. So in a sense, the problem goes away by simply removing it from the type system. Of course, you still have to handle business cases of “absence of a value,” so you create optional values—basically a value that can either have something or nothing. The compiler won’t let you access the “something” unless you first “check” that the value isn’t nothing.

So, you spend more time upfront thinking about how you model your domain rather than simply saying that everything and anything is nullable. The good thing is, you totally lose that fear of “can this value be null when I dot into it” because it’s simply not part of the type system. It’s kind of like the flow analysis that C# 8 introduced for nullability checks—but instead of flow analysis, it’s much simpler. It’s just a built-in type in the language. There’s nothing magical about it.
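A tiny sketch of what Isaac is describing (the function and values here are invented for illustration):

// A lookup that might not find anything—no nulls in sight
let tryFindUser id =
    if id = 42 then Some "Dave" else None

// The compiler forces us to handle both cases before touching the value
match tryFindUser 42 with
| Some name -> printfn "Found %s" name
| None -> printfn "No user found"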

However, when it comes to interoperating with C# (and therefore the whole BCL), F# doesn’t have any special compiler support for null checks, so developers will often create a kind of “anti-corruption” layer between the “unsafe outside world” and the safe F# layer, which simply doesn’t have nulls. There’s also work going on to bring in support for the nullability surface in the BCL but I suspect that this will be in F# 6.

F#, and functional programming in general, emphasizes purity: no side effects. Does F# enforce this, or is it just designed with it in mind?

No, it doesn’t enforce it. There are some parts of the language which make it obvious when you’re doing a side effect, but it’s nothing like what Haskell does. For starters, the CLR and BCL don’t have any notion of a side effect, so I think that this would be difficult to introduce. It’s a good example of some of the design decisions that F# took when running on .NET—you get all the goodness of .NET and the ecosystem, but some things like this would be challenging to do. In fact, F# has a lot of escape hatches like this. It strongly guides you down a certain path, but it usually has ways that you can do your own thing if you really need to.

You still can (and people do) write entire systems that are functionally pure, and the benefits of pure functions are certainly something that most F# folks are aware of (it’s much easier to reason about and test, for example). It just means that the language won’t force you to do it.

What is your one piece of programming advice?

Great question. I think one thing I try to keep in mind is to avoid premature optimisation and design. Design systems for what you know is going to be needed, with extension points for what will most likely be required. You can never design for every eventuality, and you’ll sometimes get it wrong, that’s life—optimise for what is the most likely outcome.

To read the entire interview, head on over to my site.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 ASP.NET / Blazor

🚀 .NET Core

⛅ The cloud

📔 C#

📗 F#

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Phillip Carter ]]> https://www.daveabrock.com/2020/09/26/dev-discussions-phillip-carter/ 608c3e3df4327a003ba2fe5f Fri, 25 Sep 2020 19:00:00 -0500 This is the full interview from my discussion with Phillip Carter in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

Last week, we talked to Isaac Abraham about F# from a C# developer’s perspective. This week, I’m excited to get more F# perspectives from Phillip Carter. Phillip is a busy guy at Microsoft but a big part of his role as a Program Manager is overseeing F# and its tooling.

In this interview, we talk to Phillip about Microsoft F# support, F# tooling (and how it might compare to C#), good advice for learning F#, and more.

Phillip Carter

Can you talk about what you’ve done at Microsoft, and how you landed on F#?

I spent some time bouncing around on various projects related to shipping .NET Core 1.0 for my first year-ish at Microsoft. A lot of it was me doing very little in the early days, since there was little for an entry-level PM to do. But I did find that the Docs team needed help and so I ended up writing a good portion of the .NET docs that exist on docs.microsoft.com today. Some of the information architecture I contributed to is still present there today.

I got the F# gig because I had an interest in F# and the current PM was leaving for a different team. Rather than let it sit without a PM for an indeterminate amount of time, everyone agreed that I should take the position. Always better to have someone who’s interested in the space assume some responsibility than have nobody do it, right? I’ve been working on F# at Microsoft ever since.

Do you feel F# gets the recognition and attention at Microsoft it deserves?

This is always a fun question.

Many F# programmers would emphatically proclaim, “No!” and it’s of course a meme that Microsoft Doesn’t Care About F# or whatever. But the reality is that like all other functional programming languages, F# is a niche in terms of adoption, and it is likely to stay a niche if you compare it to the likes of C#, Java, C++, Python, and JavaScript.

As a fan of programming languages in general, I feel like tech companies (like Microsoft) emphasize platforms far more than any given language. I find it kind of funny, because I’ve already seen multiple platforms come and go in our industry—but all the languages I’ve been involved with have only become more permanent and grown during that same timespan. This is kind of sidestepping the question, but it really is how I feel about the topic.

Being a niche language doesn’t mean something isn’t valuable. To the contrary, there are many large, paying customers who use F# today, and that is only expected to grow as more organizations incorporate and grow software systems and hire people who like functional programming. For example, F# powered Azure’s first billion-dollar startup (Jet.com) back in 2016. Could they have used C#? Sure. But they didn’t. Did F# cause them to use Azure? Maybe. They evaluated Azure and Google Cloud and decided on Azure for a variety of reasons, technological compatibility perhaps being one of them. But these questions don’t really matter.

From Microsoft’s perspective, we want customers to use the tech they prefer, not the tech we prefer, and be successful with it. If that’s F#, then we want to make sure that they can use our developer tools and platforms and have a good time doing it. If they can’t, then we want to do work to fix it. If you look at official statements on things like our languages, we’re fairly unopinionated and encourage people to try the language and environment that interests them the most and works the best for their scenario.

All this context matters when answering this question. Yes, I think Microsoft gives F# roughly the attention and love it deserves, certainly from an engineering standpoint. I don’t think any other major company would do something like pay an employee to fly out to a customer’s office, collect problems they are having with tooling for a niche programming language, and then have a team refocus their priorities to fix a good set of those issues all in time for a major release (in case it wasn’t clear, I am speaking from experience). From the customer standpoint, this is the kind of support that they would expect.

From a “marketing” standpoint, I think more attention should be paid to programming languages in general, and that F# should be emphasized more proportionally. But the reality is that there are a lot more platforms than there are languages, so I don’t see much of a change in our industry. I do think that things have improved a lot for F# on this front in the past few years, and I’ve found that for non-engineering tasks I’ve had to work less to get F# included in something over time.

This year alone, we’ve had four blog posts about F# updates for F# 5, with a few more coming along. And of course F# also has dedicated pages on our own .NET website, dedicated tutorials for newcomers, and a vast library of documentation that’s a part of the .NET docs site. But if people are waiting for Microsoft’s CEO to get on stage, proclaim that OOP is dead and we all need to do FP with F#, they shouldn’t be holding their breath.

This also speaks to a broader point about F# and some Microsoft tech. One of the reasons why we pushed everything into the open and worked to ensure that cross-platform worked well was because we wanted to shift the perception of our languages and tech stacks.

People shouldn’t feel like they need Microsoft to officially tell them to use something. They should feel empowered to investigate something like F# in the context of their own work, determine its feasibility for themselves, and present it to their peers.

I think it’s Microsoft’s responsibility to ensure that potential adopters are armed with the correct information and have a strong understanding that F# and .NET are supported products. And it’s also Microsoft’s responsibility to communicate updates and ensure that F# gets to “ride the wave” of various marketing events for .NET. But I really, truly want people to feel like they don’t need Microsoft for them to be successful with using and evangelizing F#. I think it’s critical that the power dynamic when concerning F# and .NET usage in any context balances out more between Microsoft and our user base. This isn’t something that can come for free, and does require active participation of people like me in communities rather than taking some lame ivory tower approach.

From the build system to the tooling, is F# a first-class citizen in Visual Studio and other tools like C# is? If I’m a C# dev coming over, would I be surprised about things I am used to?

This is a good question, and the answer to this is: it depends on what you do in Visual Studio. All developers are different, but I have noticed a stark contrast among the C# crowd: those who use visual designer tooling and those who do not.

For those who use visual designer tooling heavily, F# may not be to your liking. C# and VB are the only two Visual Studio languages that have extensive visual designer tooling support, and if you rely on or prefer these tools, then you’ll find F# to be lacking. F# has an abundance of IDE tooling for editing code and managing your codebase, but it does not plug into things like the EF6 designer, Code Map, WinForms designer, and so on.

For anyone who uses Visual Studio to primarily edit code, then F# may take some getting used to, but most of the things you’re used to using are present. In that sense, it is first class. Project integration, semantic colors, IntelliSense, tooltips (more advanced than those in C#), some refactorings and analyzers, and so on.

C# has objectively more IDE features than F#, and the number of refactorings available in F# tooling absolutely pales in comparison to C# tooling. Some of these come down to how each language works, though, so it’s not quite so simple as “F# could just have XYZ features that C# has.” But overall, I think people tend to feel mostly satisfied by the kinds of tooling available to them.

It’s often claimed that F# needs less refactoring tools because the language tends to guide programmers into one way to do things, and the combination of the F# type system and language design lends itself towards the idiom, “if it compiles, it works right.” This is mostly true, though I do feel like there are entire classes of refactoring tools that F# developers would love to use, and they’d be unique to F# and functional programming.

What’s the biggest hurdle you see for people trying to learn F#, especially from OO languages like C# or Java?

I think that OO programming in mainstream OO languages tends to over-emphasize ceremony and lead to overcomplicated programs. A lot of people normalize this and then struggle to work with something that is significantly simpler and has less moving parts.

When you expect something to be complicated and it’s simple, this simplicity almost feels like it’s mocking you, like the language and environment is somehow smarter than you. That’s certainly what I felt when I learned F# and Haskell for the first time.

Beyond this, I think that the biggest remaining hurdles simply are the lack of curly braces and immutability. It’s important to recall that for many people, programming languages are strongly associated with curly braces and they can struggle to accept that a general-purpose programming language can work well without them.

C, C++, Java, C#, and JavaScript are the most widely used languages and they all come from the same family of programming languages. Diverging greatly from that syntax is a big deal and shouldn’t be underestimated. This syntax hurdle is less of a concern for Python programmers, and I’ve found that folks with Python experience are usually warmer to the idea of F# being whitespace sensitive. Go figure!

The immutability hurdle is a big one for everyone, though. Most people are trained to do “place-oriented programming”—put a value in a variable, put that variable in a list of variables, change the value in the variable, change the list of variables, and so on. Shifting the way you think about program flow in terms of immutability is a challenge that some people never overcome, or they prefer not to overcome because they hate the idea of it. It really does fundamentally alter how you write a program, and if you have a decade or more of training with one way to write programs, immutability can be a big challenge.

As a C# developer, in your opinion: what’s the best way to learn F#?

I think the best way is to start with a console app and work through a problem that requires the use of F# types—namely records and unions—and processing data that is modeled by them.

A parser for a Domain-Specific Language (DSL) is a good choice, but a text-based game could also work well. From there, graduating to a web API or web app is a good idea. The SAFE stack combines F# on the backend with F# on the frontend via Fable—a wonderful F# to JavaScript compiler—to let you build an app in F# in multiple contexts. WebSharper also allows you to accomplish this in a more opinionated way (it’s a supported product, too). Finally, Bolero is a newer tech that lets you build WebAssembly apps using some of the more infrastructural Blazor components. Some rough edges, but since WebAssembly has the hype train going for it, it’s not a bad idea to check it out.

Although this wasn’t your question, I think a wonderful way to learn F# is to do some basic data analysis work with F# in Jupyter Notebooks or just an F# script with F# Interactive. This is especially true for Python folks who work in more analytical spaces, but I think it can apply to any C# programmer looking to develop a better understanding of how to do data science—the caveat being that most C# programmers don’t use Jupyter, so there would really be two new things to learn.

What are you most excited about with F# 5?

Firstly, I’m most excited about shipping F# 5. It’s got a lot of features that people have been wanting for a long time, and we’ve been chipping away at it for nearly a year now. Getting this release out the door is something I’m eagerly awaiting.

If I had to pick one feature I like the most, it’s the ability to reference packages in F# scripts. I do a lot of F# scripting, and I use the mechanism in Jupyter Notebooks all the time, too. It just kind of feels like magic that I can simply state a package I want to use, and just use it. No caveats, no strings attached. In Python, you need to acquire packages in an unintuitive way due to weirdness with how notebooks and your shell process and your machine’s python installation work. It’s complexity that simply doesn’t exist in the F# world and I absolutely love it.
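For reference, the mechanism Phillip is describing looks something like this in an F# 5 script (the package choice is just an example):

// script.fsx: reference a NuGet package directly—no project file required
#r "nuget: Newtonsoft.Json"

open Newtonsoft.Json

printfn "%s" (JsonConvert.SerializeObject {| Language = "F#"; Version = 5 |})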

What’s on the roadmap past F# 5? Any cool features in the next few releases?

Currently we don’t have much of a roadmap set up for what comes next. There are still a few in-progress features, two of them being rather large: a task { } computation expression in FSharp.Core that rewrites into an optimized state machine, and a revamp of the F# constraint system to allow specifying static constraints in type extensions.

The first one is a big deal for anything doing high-performance work on .NET. The second one is a big deal for lots of general F# programming scenarios, especially if you’re in the numerical space and you want to define a specialized arithmetic for types that you’re importing from somewhere else. The second one will also likely fix several annoying bugs related to the F# constraint system and generally make library authors who use that system heavily much happier.

Beyond this, we’ll have to see what comes up during planning. We’re currently also revamping the F# testing system in our repository to make it easier for contributors to add tests and generally modernize the system that tens of thousands of tests use to execute today. We’re also likely to start some investigative work to rebase Visual Studio F# tooling on LSP and to work with the F# community to use a single component in both VS and VSCode. They already have a great LSP implementation, but a big merging of codebases and features needs to happen there. It’ll be really exciting, but we’re not quite sure what the work “looks like” yet.

What’s something about working on the F# team that you wish the community knew, but probably doesn’t?

I think a lot of folks underestimate just how much work goes into adding a new language feature. Let’s consider something like nameof, which was requested a long time ago. Firstly, there needed to be a design for the obvious behaviors. But then there are non-obvious ones, like nameof when used as a pattern, or what the result of taking the name of an operator should be (there are two kinds of names for an operator in F#).

Language features need a high degree of orthogonality—they should work well with every other feature. And if they don’t, there needs to be a very good reason.

Firstly, that means a very large test matrix that takes a long time to write, but you also run into quirks that you wouldn’t initially anticipate. For example, F# has functions like typeof and typedefof that accept a type as a parameterized type argument, not an input to a function. Should nameof behave like that when taking the name of a type parameter? That means there are now two syntax forms, not just one. Is that the right call? We thought so, but it took a few months to arrive at that decision.

Another quirk in how it differs from C# is that you can’t take a fully-qualified name to an instance member as if it were static. Why not? Because nameof would be the only feature in all of F# that allows that kind of qualification. Special cases like this might seem fine in isolation, but if you have every language feature deciding to do things its own way rather than consider how similar behaviors work in the language, then you end up with a giant bag of parlor tricks with no ability to anticipate how you can or cannot use something.

Then there are tooling considerations: does it color correctly in all ways you’d use it? If I have a type and a symbol with the same name and I use nameof on it, what does the editor highlight? What name is taken? What is renamed when I invoke the rename refactoring? Does the feature correctly deactivate if I explicitly set my LangVersion to be lower than F# 5? And so on.

These things can be frustrating for people because they may try a preview, see that a feature works great for them now, and wonder why we haven’t just shipped it yet. Additionally, if it’s a feature that was requested a long time ago, there seems to be some assumption that because a feature was requested a long time ago, it should be “further along” in terms of design and implementation. I’m not sure where these kinds of things come from, but the reason why things take long is because they are extremely difficult and require a lot of focused time to hammer out all the edge cases and orthogonality considerations.

Can you talk a little about the SAFE Stack and how it can be used in ASP.NET Core applications?

The SAFE stack is a great combination of F# technologies—minus the A for Azure or AWS, I guess—to do full-stack F# programming. It wasn’t the first to achieve this—I believe WebSharper was offering similar benefits many years ago—but by being a composition of various open source tools, it’s unique.

The S and A letters mostly come into play with ASP.NET Core. The S stands for Saturn, which is an opinionated web framework that uses ASP.NET Core under the hood. It calls into a more low-level library called Giraffe, and if you want to use Giraffe instead (GAFE), you can. The A is more of a stand in for any cloud that can run ASP.NET Core, or I guess it could just mean A Web Server or something. It’s where ASP.NET Core runs under the hood here. The F and E are for Fable and Elmish, which are technologies for building web UIs with F#.

I won’t get into the details of how to use SAFE, but I will say that what I love about it is that all the technologies involved are entirely independent of one another and squarely F# technologies. Yes, they use .NET components to varying degrees and rely on broader ecosystems to supply some core functionality. But they are technologies made by and for F# developers first.

This is a level of independence for the language that I think is crucial for the long-term success of F#. People can feel empowered to build great tech intended mainly for F# programmers, combine that tech with other tech, and have a nice big party in the cloud somewhere. What SAFE represents to me is more important than any of the individual pieces of tech it uses.

We’re seeing a lot of F# inspiration lately in C#, especially with what’s new in C# 9 (with immutability, records, for example). Where do you think the dividing line is between C# with FP and using F#? Is there guidance to help me make that decision?

I think the dividing line comes down to two things: what your defaults are and what you emphasize.

C# is getting a degree of immutability with records. But normal C# programming in any context is mutable by default. You can do immutable programming in C# today, and C# records will help with that. But it’s still a bit of a chore because the rest of the language is just begging you to mutate some variables.

They’re called variables for a reason! This isn’t a value judgement, though. It’s just what the defaults are. C# is mutable by default, with an increasing set of tools to do some immutability. F# is immutable by default, and it has some tools for doing mutable programming.

I think the second point is more nuanced, but also more important. Both C# and F# implement the .NET object system. Both can do inheritance, use accessibility modifiers on classes, and do fancy things with interfaces (including interfaces with default implementations). But how many F# programmers use this side of the language as a part of their normal programming? Not that many. OOP is possible in F#, but it’s just not emphasized. F# is squarely about functional programming first, with an object programming system at your disposal when you need it.

On the other hand, C# is evolving into a more unopinionated language that lets you do just about anything in any way you like. The reality is that some things work better than others (recall that C# is not immutable by default), but this lack of emphasis on one paradigm over the other can lead to vastly different codebases despite being written in the same language.

Is that okay? I can’t tell. But I think it makes identifying the answer to the question, “how should I do this?” more challenging. If you wanted to do functional programming, you could use C# and be fine. But by using a language that is fairly unopinionated about how you use it, you may find that it’s harder to “get it” when thinking functionally than if you were to use F#. Some of the principles of typed functional programming may feel more difficult or awkward because C# wasn’t built with them in at first. Not necessarily a blocker, but it still matters.

What I would say is that if you want to do functional programming, you will only help yourself by learning F# when learning FP. It’s made for doing functional programming on .NET first and foremost, and as a general guideline it’s a good idea to use tools that were made for a specific purpose if you are aligned with that purpose. You may find that you don’t like it, or that you thought some things were cool but you’re ultimately happier with taking what you learned and writing C# code in a more functional style from now on. That’s great, and you shouldn’t feel any shame in deciding that F# isn’t for you. But it’ll certainly make writing functional C# easier, since you’ll already have a good idea of how to generally approach things.

Something I ask everyone: what is your one piece of programming advice?

The biggest piece of advice I would give people is to look up the different programming paradigms, functional being one of them, and try out a language from some of them.

Most programmers are used to an ALGOL-derived language, and although they are great languages, they all tend to encourage the same kind of thought process for how you write programs. Programming can be a tool for thought, and languages from different backgrounds encourage different ways of thinking about solving problems with programming languages. I believe that this can make people better programmers even if they never use F# or other languages outside of the mainstream ones.

Additionally, all languages do borrow from other ones to a degree, and understanding different languages can help you master new things coming into mainstream languages.

You can reach Phillip Carter on Twitter.

]]>
<![CDATA[ Get to know your .NET streamers ]]> https://www.daveabrock.com/2020/09/24/dot-net-streamers/ 608c3e3df4327a003ba2fe5e Wed, 23 Sep 2020 19:00:00 -0500 I’ve been geeking out on a lot of Twitch streams lately. I think it’s a fantastic way to learn and engage with other folks. I find I learn more by watching someone code in a natural environment and asking questions as I go. And with most streams getting archived, you can consume them whenever they fit your schedule.

I keep track of a few, but knowing I was missing some folks I sent out a tweet:

As a result of the responses, I thought I’d keep a collection of awesome .NET streams for everyone to reference. If I left out your favorite stream, it wasn’t intentional—please respond to the tweet or leave a comment below to get added to the list. I just ask that you suggest streams that are somewhat active (streaming every few months or so).

The Live Coders

For a good first stop, hit up The Live Coders, a group of tech streamers across all platforms (not just .NET). You can browse the members page for folks tagged with .NET categories (or learn more about non-.NET tech).

General .NET topics

Microsoft

]]>
<![CDATA[ The .NET Stacks #17: EF Core 5, Blazor + CSS, and community! ]]> https://www.daveabrock.com/2020/09/19/dotnet-stacks-17/ 608c3e3df4327a003ba2fe5d Fri, 18 Sep 2020 19:00:00 -0500 Just a reminder that in your methods, try to write fewer than 914 “using var” statements.

We’re talking about a lot of little things this week:

  • EF Core 5 is done
  • Blazor CSS isolation
  • The curious case of unit testing
  • September is F#-tember at The .NET Stacks
  • Community links

EF Core 5 is done

This week, Arthur Vickers proclaimed that Entity Framework Core 5.0 is done, pending any issues or bug fixes. Arthur says the team has also spent the last two weeks squashing bugs, writing better exception messages, and improving the docs.

You can check out the official plan, but I’ll be looking forward to the following features:

You can check out the daily builds today.

Blazor CSS isolation

With the latest .NET preview, the Blazor team released CSS isolation (among many other things). With limited documentation available, I wrote about it.

The gist: CSS isolation allows you to scope your CSS to your components. Instead of loading all your styles unnecessarily and incurring CSS bleed when dealing with multiple components and libraries, you can say: I only want these styles to be associated with my component. There’s a lot of flexibility, too—you can use the ::deep combinator in your scoped CSS to pass styles down to child components, and working with CSS preprocessors is a snap.

CSS isolation in Blazor is a build-time step—this means there’s no reliance on JavaScript or .NET code, you can easily integrate with existing tooling, and there’s no performance hit if you aren’t using it. However, this also means you’re forced to recompile to see any new CSS changes. This will force you into using dotnet watch if you haven’t already—hitting F5 in Visual Studio every time is no way to go through life.
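If you haven’t tried it, it’s a single command from your project directory:

dotnet watch run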

I’ll be writing the official Microsoft reference doc on it, so if you want anything covered that isn’t in my post or the GitHub issue for the doc, let me know.

The curious case of unit testing

My thinking on unit testing has been shifting a bit in the last six months or so. The timing couldn’t be better: I’m seeing a lot of chatter about unit testing both inside and outside of the .NET developer community.

Back in early July, Alexey Golub wrote a popular piece called Unit Testing is Overrated. It aligned with a lot of what I was thinking about—placing your primary focus on unit tests isn’t always valuable. This is not to say you shouldn’t value them, or champion them, or appreciate them. Quite the opposite. It’s when we decouple so much while saying “we need to make the code testable” that, in the end, the only value is making unit testing work. Are we just testing our mocks?

Instead of spending a ridiculous amount of time setting up mocks that don’t replicate much of a complex software system, why not rely more on integration tests? This opinion would have been blasphemy just ten (or even five) years ago. After all, unit tests should be quick, repeatable, and pure (no side effects). With the rise of containerization and better cloud infrastructure, the “integration tests are slow and flaky” argument is less of an issue.

To that end: last month, Khalid Abuhakmeh wrote Secrets of a .NET Professional, in which he writes this (which led to a spirited Twitter discussion):

The systems we build have many complex external dependencies. The hubris of thinking we could mock away decades of investment in a database, web service, or any other technology has been the source of many frustrating arguments. In my opinion, one good integration test is worth 1,000 unit tests… With the rise of containerization, the pain of writing integration tests has never been lower.

My thought is that unit testing is crucial for testing and verifying core (and pure) business logic. But you also need to test against your dependencies, and many times the best bang for your buck comes with integration testing—as long as the tests are reasonably fast and cheap. I’m no longer obsessing over testing every single layer of an MVC app (as an aside, Andrew Lock wrote a nice post that questions the value of unit testing APIs and MVC controllers).
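As a rough illustration of how cheap these tests have become, here’s a minimal ASP.NET Core integration test using xUnit and WebApplicationFactory from the Microsoft.AspNetCore.Mvc.Testing package (the Startup class and the route are placeholders for your own app):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class EmployeeApiTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly WebApplicationFactory<Startup> _factory;

    public EmployeeApiTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task GetEmployees_ReturnsSuccess()
    {
        // Spins up the app in memory: real routing, real middleware, no mocks.
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/api/employees");

        response.EnsureSuccessStatusCode();
    }
}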

Our industry gets dogmatic, so opinions like these can get heated. But to me, it doesn’t really seem like a controversial opinion to take a nuanced approach to testing so we can find the most value in our hectic days. Containerization has revolutionized our industry—so let’s take advantage of it.

September is F#-tember at The .NET Stacks

Do you work in F#? Or are you a C# developer and intrigued by its possibilities but haven’t found time to dive in? As C# has “borrowed” a lot of functional programming paradigms, you might be wondering about tradeoffs between C# “with functional bits” and straight F#. I’ve got you covered.

Next week, I’m interviewing Isaac Abraham, author of Get Programming with F#. Soon after, I’ll post an interview with Phillip Carter, the PM for F# at Microsoft. Stay tuned for some great conversations.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

We had three community standups: Languages & Runtime explores miscellaneous topics, Machine Learning talks SciSharp, and ASP.NET talks about Microsoft.Identity.Web.

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Isaac Abraham ]]> https://www.daveabrock.com/2020/09/19/dev-discussions-isaac-abraham/ 608c3e3df4327a003ba2fe5c Fri, 18 Sep 2020 19:00:00 -0500 This is the full interview from my discussion with Isaac Abraham in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

As much as we all love C#, there’s something that needs reminding from time to time: C# is not .NET. It is a large and important part of .NET, for sure, but .NET also supports two other languages: Visual Basic and F#. As for F#, it’s been gaining quite a bit of popularity over the last several years, and for good reason: it’s approachable, concise, and allows you to embrace a functional-first language while leveraging the power of the .NET ecosystem.

I caught up with Isaac Abraham to learn more about F#. After spending a decade as a C# developer, Isaac embraced the power of F# and founded Compositional IT, a functional-first consultancy. He’s also the author of Get Programming with F#: A Guide for .NET Developers.

Isaac Abraham

Can you walk us through your career, what you’re doing now, and how you landed on F#?

I scraped into university having nearly flunked my senior school exams and took a degree in Computer Systems—it was a very practical software development degree, but there was basically minimal computer science. To this day, I don’t know how things like Big O notation work or how to implement a red-black tree—and I’m pretty woeful at maths.

I spent over 10 years as a C#/SQL enterprise-y developer after graduating, which was just when .NET 1.0 was launching. I went through the whole C#/OO developer journey—reading up on things like SOLID, doing the whole TDD thing religiously, and using IoC containers. It’s funny: until I learned about design patterns I had no real idea of how to “apply” OO in a coherent sense, which is remarkable when I think about it now. I worked easily for over five years writing what I now know to be more or less procedural code in OO languages, but no one ever flagged it.

I actually think this is quite common in the industry. After working at some enterprise organisations and .NET consultancies, I worked as a contractor in finance and ended up as a partner at a small Azure big data consultancy (both Azure and big data were in their infancy back then) before founding Compositional IT (CIT).

I started getting into F# when I started building a rules engine for an investment bank. We wrote it in C# but I realised afterwards that we had unknowingly been writing a composable functional pipeline in an OO language, creating one-method interfaces and chaining calls together using decorators and stuff. This was around 2010, when F# had just been included “in the box” in Visual Studio 2010—so I started playing around with it as a hobby, partly because it looked interesting but also because I had grown frustrated with the amount of rigour and code needed in C# in order to ensure the quality of our software. I also started going to the London F# meetup which was a real eye opener: a different community with new ways of doing things in .NET. I also learned about open source—until then, I had no clue what GitHub was.

After I moved to the Azure consultancy, I started using F# more. Primarily, I used it as a “glue” language for things like scripts and data analysis, but I realised after a while that it was just a really flexible general-purpose language that let me get stuff done more quickly—so I started writing all sorts in it.

I was surprised. I had thought it was simply for maths or data scraping but found it equally adept at web applications or cloud services in Azure. Once I’d gotten over the initial hurdle of resisting the OO muscle memory and learned techniques to achieve the same goals that I knew in C#, it was completely fun and empowering—as if I had started programming again and learning something new and exciting.

Just before I moved to Germany, I founded CIT—the goal was simple: functional programming for everyday software. Not maths and science or big data, but line of business apps, database systems, ETL processes, and so on. We’ve been going for over 4 years now. We provide training and coaching for teams looking to use F#, as well as consultancy and development services. We do all of our software development on .NET in F#, from full-stack web apps using the SAFE Stack to data transformation engines to cloud services.

I know it’s more nuanced than this: but if you could sell F# to C# developers in a sentence or two, how would you do it?

F# really does bring the fun back into software development. You’ll feel more productive, more confident and more empowered to deliver high-quality software for your customers.

Functional programming is getting a lot of attention in the C# world, as the language is adopting much of its concepts (especially with C# 9). It’s a weird balance: trying to have functional concepts in an OO language. How do you feel the balance is going?

I have mixed opinions on this. On the one hand, for the C# dev it’s great—they have a more powerful toolkit at their disposal. But I would hate to be a new developer starting in C# for the first time. There are so many ways to do things now, and the feature (and custom operator!) count is going through the roof. More than that, I worry that we’ll end up with a kind of bifurcated C# ecosystem—those that adopt the new features and those that won’t, and worse still: the risk of losing the identity of what C# really is.

I’m interested to see how it works out. Introducing things like records into C# is going to lead to some new and different design patterns being used that will have to naturally evolve over time.

I won’t ask if C# will replace F#—you’ve eloquently written about why the answer is no. I will ask you this, though: is there a dividing line of when you should use C# (OO with functional concepts) or straight to F#?

I’m not really sure the idea of “OO with functional concepts” really gels, to be honest. Some of the core ideas of FP—immutability and expressions—are kind of the opposite of OO, which is all centered around mutable data structures, statements and side effects. By all means: use the features C# provides that come from the FP world and use them where it helps—LINQ, higher order functions, pattern matching, immutable data structures—but the more you try out those features to see what they can do without using OO constructs, the more you’ll find C# pulls you “back.” It’s a little like driving an Audi on the German motorway but never getting out of third gear.

My view is that 80% of the C# population today—maybe more—would be more productive and happier in F#. If you’re using LINQ, you favour composition over inheritance, and you’re excited by some of the new features in C# like records, switch expressions, tuples, and so on, F# will probably be a natural fit for you. All of those features are optimised as first-class citizens of the language, whilst things like mutability and classes are possible, but are somewhat atypical.

This also feeds back to your other question—I do fear that people will try these features out within the context of OO patterns, find them somehow limited, and leave thinking that FP isn’t worthwhile.

Let’s say I’m a C# programmer and want to get into F#. Is there any C# knowledge that will help me understand the concepts, or is it best to clear my mind of any preconceived notions before learning?

Probably the closest concept would be to imagine your whole program was a single LINQ query. Or, from a web app—imagine every controller method was a LINQ query. In reality it’s not like that, but that’s the closest I can think of. The fact that you’ll know .NET inside and out is also a massive help. The things to forget are basically the OO and imperative parts of the language: classes, inheritance, mutable variables, while loops, and statements. You don’t really use any of those in everyday F# (and believe me, you don’t need any of them to write standard line of business apps).
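A loose C# illustration of that mindset (the data is invented): the whole “program” is one expression from input to output, with no statements or mutation along the way, which is roughly the flavor of everyday F#:

using System;
using System.Collections.Generic;
using System.Linq;

var employees = new List<(string Name, string Team, bool IsActive)>
{
    ("Ada", "Compilers", true), ("Grace", "Compilers", true), ("Linus", "Kernel", false)
};

// Filter, group, project, and sort in a single expression.
var summary = employees
    .Where(e => e.IsActive)
    .GroupBy(e => e.Team)
    .Select(g => new { Team = g.Key, Headcount = g.Count() })
    .OrderByDescending(x => x.Headcount);

foreach (var row in summary)
    Console.WriteLine($"{row.Team}: {row.Headcount}");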

With your experience bringing people and organizations on to F#, what have teams done to make it successful?

I’m currently writing a series on this very subject. In some ways, it’s no different to adopting any technology or tool. For example, “How can I introduce unit testing in my team?” The blockers are very rarely technical—it’s nearly always organisational or process-related. The main tips I can give are to plan your adoption thoroughly and thoughtfully, and to show that you believe in F#. Don’t do it as a one-person endeavour—have at least two or three developers work on it. And ideally get a coach to help mentor your team as they start using F# to avoid common pitfalls.

Most people at the management level will support a team adopting any new technology as long as you can show the benefits of it and (importantly) address any concerns that individuals may have, and F# is no different in that regard. Ironically, I often see F# suggested to teams from the management level and not the other way around!

Lastly, there are so many great learning resources available nowadays that simply weren’t available even 5 years ago, whether that’s books, online courses, YouTube, Slack, and so on. Plus, .NET Core has F# built in, so you don’t even need to install anything to start trying it out.

As an OO programmer, it’s so painful always having to worry about “the billion dollar mistake”: nulls. We can’t assume anything since we’re mutating objects all over the place and often throw up our hands and do null checks everywhere (although the language has improved in the last few versions). How does F# handle nulls? Is it less painful?

For F# types that you create, the language simply says: null isn’t allowed, and there’s no such thing as null. So in a sense, the problem goes away by simply removing it from the type system. Of course, you still have to handle business cases of “absence of a value,” so you create optional values—basically a value that can either have something or nothing. The compiler won’t let you access the “something” unless you first “check” that the value isn’t nothing.

So, you spend more time upfront thinking about how you model your domain rather than simply saying that everything and anything is nullable. The good thing is, you totally lose that fear of “can this value be null when I dot into it” because it’s simply not part of the type system. It’s kind of like the flow analysis that C# 8 introduced for nullability checks—but instead of flow analysis, it’s much simpler. It’s just a built-in type in the language. There’s nothing magical about it.

However, when it comes to interoperating with C# (and therefore the whole BCL), F# doesn’t have any special compiler support for null checks, so developers will often create a kind of “anti-corruption” layer between the “unsafe outside world” and the safe F# layer, which simply doesn’t have nulls. There’s also work going on to bring in support for the nullability surface in the BCL but I suspect that this will be in F# 6.

F#, and functional programming in general, emphasizes purity: no side effects. Does F# enforce this, or is it just designed with it in mind?

No, it doesn’t enforce it. There are some parts of the language which make it obvious when you’re doing a side effect, but it’s nothing like what Haskell does. For starters, the CLR and BCL don’t have any notion of a side effect, so I think that this would be difficult to introduce. It’s a good example of some of the design decisions that F# took when running on .NET—you get all the goodness of .NET and the ecosystem, but some things like this would be challenging to do. In fact, F# has a lot of escape hatches like this. It strongly guides you down a certain path, but it usually has ways that you can do your own thing if you really need to.

You still can (and people do) write entire systems that are functionally pure, and the benefits of pure functions are certainly something that most F# folks are aware of (it’s much easier to reason about and test, for example). It just means that the language won’t force you to do it.

Do you feel in the overall .NET community, that F# gets enough attention from Microsoft?

I’d always love for there to be more F# coverage! If I think back to when I first started looking into F# though, it’s come a long way—there’s more awareness of this language out there, especially from some of the bigger names in the .NET team. People are starting to realise that F# isn’t some crazy maths-and-science language, but it’s a general purpose language—it’s just that it takes a different approach towards how you organise your code. It still frustrates me to occasionally see people use C# and .NET interchangeably, of course, but it’s getting better.

What is your one piece of programming advice?

Great question. I think one thing I try to keep in mind is to avoid premature optimisation and design. Design systems for what you know is going to be needed, with extension points for what will most likely be required. You can never design for every eventuality, and you’ll sometimes get it wrong—that’s life. Optimise for the most likely outcome.

]]>
<![CDATA[ The .NET Stacks #16: App trimming and more on ML.NET ]]> https://www.daveabrock.com/2020/09/12/dotnet-stacks-16/ 608c3e3df4327a003ba2fe5b Fri, 11 Sep 2020 19:00:00 -0500 Welcome to 2020, where Thomas Running is finally getting the credit he deserves.

With the last .NET 5 preview hitting last week, you’d think we wouldn’t have much to talk about! Oh, no—there is always something to talk about.

This week:

  • App trimming in .NET 5
  • More with Luis Quintanilla on ML.NET
  • Community roundup

App trimming in .NET 5

With .NET 5 preview 8 shipped last week, Microsoft has been pushing a lot of articles about the performance improvements. This week was no exception, as Sam Spencer discussed how app trimming will work in .NET 5. It’s not as sexy as Blazor, but it’s crucially important.

A huge driver for .NET Core over .NET Framework, among several other things, is self-contained deployments. There’s no dependency on a framework installation, so setup is easier, but the size of the apps is much larger. With .NET Core 3, Microsoft introduced app trimming, or assembly linking, which optimizes your deployment size. Long story short, it only packages assemblies that are used.

That’s great if you forget to remove dependencies you’re no longer using, but the real value comes from looking inside the assemblies themselves. That’s what’s coming with .NET 5: app trimming has been expanded to remove types and members unused by your application as well. This seems both exciting and scary: it’s quite risky and requires extensive testing, and as a result, Spencer notes it’s an experimental feature not ready for broad adoption yet. With that in mind, the default trimming in .NET 5 is assembly-level, but you can use a <TrimMode>Link</TrimMode> setting in your project file to enable member-level trimming.
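If you want to experiment with it, the opt-in is a couple of project file properties. Here’s a minimal sketch, assuming a self-contained .NET 5 publish (PublishTrimmed turns trimming on; TrimMode switches it to member level):

<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
  <TrimMode>Link</TrimMode>
</PropertyGroup>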

This is not set-and-forget, though: trimming only does a static analysis of your code and, as you likely know, .NET Core depends heavily on dynamic code patterns like reflection—especially for dependency injection and object serialization. Because the trimmer can’t discover types that these patterns only surface at runtime, it could lead to disastrous results. What if you dynamically discover an interface at runtime that the analyzer didn’t find, and its types have been trimmed out? Instead of trying to resolve all your problems for you (which will never be a perfect process), the approach in .NET 5 is to provide feedback when the trimmer isn’t sure about something, plus annotations you can use to help the trimmer, especially when dealing with these dynamic behaviors.
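Here’s a small sketch of what those annotations look like, using the DynamicallyAccessedMembers attribute from System.Diagnostics.CodeAnalysis (the PluginLoader type and its method are hypothetical):

using System;
using System.Diagnostics.CodeAnalysis;

public static class PluginLoader
{
    // The annotation tells the trimmer: whatever Type flows into this parameter,
    // preserve its public parameterless constructor, even if static analysis
    // never sees it used.
    public static object Create(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)]
        Type pluginType)
    {
        return Activator.CreateInstance(pluginType);
    }
}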

Speaking of reflection, do you remember when we talked about source generators last week? The .NET team is looking at implementing source generators to move functionality from reflection at runtime to build-time. This speeds up performance but also allows the trimmer to better analyze your code—and with a higher level of accuracy.

A big winner here is Blazor—with the release in May, Blazor now utilizes .NET 5 instead of Mono (and, with it, an increased size). Blazor WebAssembly is still a little heavy, and this will go a long way to making the size more manageable.

Until native AOT comes to .NET—and boy, we’re impatiently waiting—this will hopefully clear the path for its success. I’m cautiously optimistic. I know app trimming has been a bumpy road this far, but the annotations might provide for a better experience—and allow us to confidently trim our apps without sacrificing reliability.

Take a look at the introductory article as well as a deep dive on customization.

Dev Discussions: More with Luis Quintanilla

Last week, we began a conversation with Luis Quintanilla about ML.NET. This week, we’re discussing when to use ML.NET over something like Azure Cognitive Services, practical uses, and more.

Luis Quintanilla

Where is the dividing line for when I should use machine learning, or use Azure Cognitive Services?

This is a really tough one to answer because there’s so many ways you can make the comparison. Both are great products and in some areas, the lines can blur. Here are a few of them that I think might be helpful.

Custom Training vs Consumption

If you’re looking to add machine learning into your application to solve a fairly generic problem, such as language translation or identifying popular landmarks, Azure Cognitive Services is an excellent option. The only knowledge you need to have is how to call an API over HTTP.

Azure Cognitive Services provides a set of robust, state-of-the-art, pretrained models for a wide variety of scenarios. However, there’s always edge cases. Suppose you have a set of documents that you want to classify and the terminology in your industry is rare or niche. In that scenario, the performance of Azure Cognitive Services may vary because the pretrained models most likely have never encountered some of your industry terms. At that point, training your own model may be a better option, which for this particular scenario, Azure Cognitive Services does not allow you to do.

That’s not to say Azure Cognitive Services does not allow you to train custom models. Some scenarios like language understanding and image classification have training capabilities. The difference is, training custom models is not the default for Azure Cognitive Services. Instead, you are encouraged to consume the pretrained models. Conversely, with ML.NET, for training purposes, you’re always training custom models using your data.

Online vs Offline

By default, Azure Cognitive Services requires some form of internet connectivity. In scenarios where there is strong network connectivity, this is not a problem. However, for offline or air-gapped environments, this is not an option. Although in some scenarios, you can deploy your Azure Cognitive Services model as a container, therefore reducing the number of requests made over the network, you still need some form of internet connectivity for billing purposes. Additionally, not all scenarios support container deployments; therefore, the types of models you can deploy are limited.

While Azure in general makes sure to responsibly protect the privacy and security of users’ data in the cloud, in some cases, whether it’s a regulatory or organizational decision, putting your data in the cloud may not be an option. In those cases, you can leverage Azure Cognitive Services container deployments. Like with the offline scenario, the types of models you can deploy are limited. Additionally, you most likely would not be training custom models, since you wouldn’t want to send your data to the cloud.

ML.NET is offline first, which means you can train and deploy your models locally without ever interacting with a cloud environment. That being said, you always have the option to scale your training and consumption by provisioning cloud resources. Another benefit of being offline first is that you don’t have to containerize your application in order to run it locally. You can take your model and deploy it as part of a WPF application without the additional overhead of a container or connecting to the internet.

Do you have some practical use cases for using ML.NET in business apps today?

Definitely! If you have a machine learning problem, the framework you use is fairly agnostic as long as the scenario is supported. Since ML.NET supports classical machine learning as well as deep learning scenarios, it can help solve a wide variety of problems. Some examples include:

  • Sentiment analysis
  • Sales forecasting
  • Image classification
  • Spam detection
  • Predictive maintenance
  • Fraud detection
  • Web ranking
  • Product recommendations
  • Document classification
  • Demand prediction
  • And many others…

For users interested in seeing how companies are using ML.NET in production today, I would suggest visiting the ML.NET customer showcase.

What kind of stuff do you work on over at your Twitch stream? When is it?

Currently I stream on Monday and Wednesday mornings at 10 AM EST. My stream is a live learning session where folks sometimes drop in to learn together. On stream, I focus on building data and machine learning solutions using .NET and other technologies. The language of choice on stream is F#, though I don’t strictly abide by that and will use what works best for getting the solution working.

Most recently, I built a deep learning model for malware detection using ML.NET. On stream, I’ve also been trying to build .NET pipelines with Kubeflow, a machine learning framework for Kubernetes. I’ve had some trouble with that, but that’s what the stream is about: learning from mistakes.

Inspired by a Reddit post that asked how to get started with data analytics in F#, I’ve started working on using F# and various .NET tools like .NET Interactive to analyze the arXiv dataset.

If any of that sounds remotely interesting, feel free to check out and follow the channel on Twitch. You can also catch the recordings from previous streams on YouTube.

What is your one piece of programming advice?

Just do it! Everyone has different learning styles, but I strongly believe no amount of videos, books, or blog posts compares to actually getting your hands dirty. It can definitely be daunting at first, but no matter how small or basic the application is, building things is always a good learning experience. Often, a lot of work goes into producing content, so end users typically get the polished product and the happy path. When you’re not sure where to start or would like to go more in depth, these resources are excellent. However, once you stray from that guided environment, you start making mistakes. Embrace these mistakes, because they’re a learning experience.

Check out the full interview with Luis at my site.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use CSS isolation in your Blazor projects ]]> https://www.daveabrock.com/2020/09/10/blazor-css-isolation/ 608c3e3df4327a003ba2fe5a Wed, 09 Sep 2020 19:00:00 -0500 I’m excited to see Blazor now supporting CSS isolation—also known as scoped CSS.

This post discusses how to use CSS isolation with the latest preview bits, a feature adored by those in the Angular and Vue space, and for good reason—once you have it, you’ll soon wonder how you ever went without it.


Prerequisites

Before we get started, make sure you’ve installed the latest preview bits. To do this, install the latest .NET 5 SDK.

Also, you’ll need to create a Blazor app (either Server or WebAssembly is fine) using the tooling of your choice. For me, the quickest way to get started is with the .NET CLI:

dotnet new blazorwasm -o CssIsolationApp
cd CssIsolationApp
dotnet run

Of course, you can use Visual Studio tooling as well—do whatever works for you!

Once you verify the app is up and running, you’ll be ready to go.

The problem

The beauty of Blazor is in its component model. With components, you get self-contained “chunks” of your UI that you can share and reuse across your projects (not to mention with shared class libraries). Until Blazor CSS isolation came along, using CSS with your components went against a lot of that, which led to a frustrating experience. Let’s walk through an example to explain why.

In the generated sample Blazor app, we have three pages: Home, Counter, and Fetch data. With even a basic knowledge of CSS concepts, we know that if we do something like this in wwwroot/css/app.css, the site’s global CSS…

h1 {
    color: brown;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

…we see that this change applies to every h1 on every page in our project. As a result, if we want a different heading style on every page—like a crazy person!—we need to differentiate them somehow:

.hello-world-heading {
    color: brown;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

.counter-heading {
    color: aquamarine;
    font-family: 'Times New Roman', Times, serif;
}

.fetch-data-heading {
    color: blueviolet;
    font: 'Comic Sans';
}

Not only that, I need to go into each of my pages and apply the CSS classes to them.

For Pages/Index.razor:

<h1 class="hello-world-heading">Hello, world!</h1>

For Pages/Counter.razor:

<h1 class="counter-heading">Counter</h1>

For Pages/FetchData.razor:

<h1 class="fetch-data-heading">Weather forecast</h1>

After you do that, all the styles should be applied—and, as an added bonus, you’ve stopped taking me seriously since I told you to use Comic Sans.

Even using a basic example, you’re already seeing the pain points. Because you’re styling defensively to avoid collisions between components and other libraries, you’re left with a bloated file and no way to trace styles back to your components. You’re essentially working without namespaces. Can you imagine? We would never do this in C#, but this is what we’re doing with our CSS.

In addition to a terrible developer experience, you’re also adding bloat to your application by loading styles when they aren’t referenced.

In your browser’s developer tools, you can verify this quite easily—I’m using Show Coverage from Chrome Dev Tools. As you can see from the screenshot below, I’m not even using 40% of my styles. You can see the heading styles from the other components are being loaded, even though we know they aren’t used.

A lot of unused styles

There are ways to get around this by bringing in external libraries and tools from both inside and outside of the Blazor ecosystem. If it works for you, great—but I ask: isn’t the promise of one toolchain a big reason why you’re using Blazor?

With a knowledge of our pain points, let’s add CSS isolation to our sample application.

Use CSS isolation

It’s quite easy to bind your CSS to your component. To do this, inside of your Pages directory (and not with the global CSS file), add new files with the format MyComponent.razor.css. So, add these three files to the project:

  • Index.razor.css
  • Counter.razor.css
  • FetchData.razor.css

Once you do that, cut and paste the styles you created for the individual headings into the individual files. If you run your project again, you’ll see that everything still works. The difference here is that everything is scoped to the single component—and you don’t even need to add a reference!

If you happen to run the coverage test again, you’ll see that the styles for your other components aren’t being loaded with your existing component.

Without worrying about conflicting with other components or libraries, we can change our CSS styles to simple h1 selectors. For example, in our Index component, we can change this style:

.hello-world-heading {
    color: brown;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

To a simple h1:

h1 {
    color: brown;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

I’m incredibly happy with the simplicity of this solution. Many folks have asked about using @css blocks in components, but it involved integrating a CSS parser into the Razor compiler—which appears to be quite expensive.

How does this magic work?

For this to work, Blazor appends a special attribute to your CSS classes, which binds your classes to the specific component.

The component-specific attribute appended to our styles

If you’re curious, you can head over to the Network panel of your favorite browser’s developer tools. You’ll see that Blazor loads in a MyProject.styles.css file, where MyProject is the name of your project. Here, you’ll see the styles for all our components, each referenced by that unique ID—as a bonus, it’s super helpful to have the component’s name commented for us.

The bundled styles file, with each component’s styles referenced by a unique ID

This styles.css file is the result of bundling all your isolated CSS files for your project into a single output. Don’t take my word for it: if you view the <head> on your page, you’ll see the reference that’s generated for you.

<link href="MyProject.styles.css" rel="stylesheet">

Because each project has a different styles.css file, they’ll need to know about each other somehow. This is accomplished using CSS imports. In this example, you’ll see my project references a shared Razor component.

The styles.css file importing scoped styles from a referenced Razor component

Armed with this knowledge, if we take a larger view of the DOM it’ll make a lot more sense. The new h1 class refers to our Index component (b-dew6pvofzw), and the other styles are brought in from the Shared/MainLayout component (b-vtqmmfsxlh).

The rendered DOM, with scope attributes for the Index and MainLayout components

This pattern has worked well with Vue and there was no sense in reinventing the wheel.

Reminder: CSS isolation is a build-time step

To support isolation, Blazor rewrites all the CSS selectors during the build process. This makes prerendering a snap, since there’s no reliance on existing .NET or JavaScript code. On the other side of the coin, this means you’ll need to recompile to see any new changes—if you’re used to saving a CSS change and seeing your changes immediately, it’s a drag.

How to work with child components

Call me a mind reader, but you’re probably wondering how this works with child components. Thanks so much for asking. There’s only one way to find out.

In your Pages directory, add a new component called MyChild.razor with the following:

@page "/child"

<h1>I'm a child component it's true</h1>

<p>No, seriously.</p>

Finally, drop the (child) component in our Index.razor (parent) component. Your Index.razor component will look like this now:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<MyChild />

Fire up your app and see what happens. We notice that, by default, scoped styles do not apply to child components. The styles in your *.razor.css files only get applied to the rendered output of that specific component.

The child component doesn’t inherit the parent’s scoped styles

Don’t worry, though: we can cascade styles down to child components without the need for a new component-specific CSS file. We’ll do this with a ::deep combinator in our CSS. Change the contents of Index.razor.css to the following:

::deep h1 {
    color: brown;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
}

Fire up your app and see that … it doesn’t work. Because of how the markup is structured, Blazor can’t determine the relationship between the parent component and the child component. Surround the markup with a <div> tag, and it’ll work:

@page "/"

<div>
    <h1>Hello, world!</h1>

    Welcome to your new app.

    <MyChild />
</div>

Now, it works great. We’re able to have our child components inherit styles from our parent component.

The child component now inherits styles from the parent component

If we look at our attribute:

The scope attribute applied to the parent’s markup

Blazor identifies the child style as “belonging” to the parent component in scoped.styles.css.

The ::deep rule in the bundled styles, scoped to the parent component

Integrate with your favorite preprocessors

You may be leveraging your own CSS preprocessor. A popular preprocessor, like SASS, makes the writing of CSS more enjoyable with support for things that CSS doesn’t provide out of the box—like variables, nesting, modules, mixins, and inheritance.

In an effort to make things more generalized and extensible (and, ahem, not to mention shipping this on time), the Blazor CSS isolation feature does not directly offer CSS preprocessor support—but it doesn’t need to. For whatever tool you’re using, you just need to ensure that the preprocessor compiles its output to your MyComponent.razor.css file before the Blazor build step occurs. This allows you to be flexible: you can continue using existing tools like Webpack or one of the several .NET tools available, like Delegate.SassBuilder. Let’s do a quick demo using Delegate.SassBuilder.

First, go out and get SassBuilder in one of the many ways available to you (NuGet Package Manager, .NET CLI, Package Manager Console). For me, I’ll just add it to my project file and let the restore process take over. Add the reference to your existing <ItemGroup> packages.

<ItemGroup>
    <PackageReference Include="Delegate.SassBuilder" Version="1.4.0" />
</ItemGroup>

In your Pages directory, add a new SASS file. Let’s call it Index.razor.scss. It’ll be placed alongside the CSS file. You don’t need to touch the CSS file—our changes will be compiled into it.

In Index.razor.scss, have fun with some variables (we changed our color from brown to red to validate things are working):

$color: red;
$font: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;

h1 {
    color: $color;
    font-family: $font;
}

During the build, the .scss file is compiled to Index.razor.css and we see that our changes are in place.

The compiled styles from our SASS file, with the heading now red

Disable automatic bundling

If you have a process that works for you, fantastic. If you want to opt out of how Blazor publishes and loads scoped files at runtime, you can disable it by using an MSBuild property. As mentioned in the GitHub issue, this means it’s your responsibility to grab the scoped CSS files from the obj directory and do the required steps to publish and load them during runtime.

If you’re good with that, add the DisableScopedCssBundling MSBuild property to your project file.

<PropertyGroup>
  <DisableScopedCssBundling>true</DisableScopedCssBundling>
</PropertyGroup>

Wrap up

In this post, we reviewed the new CSS isolation feature for Blazor. We discussed its benefits, the problems it solves, how to use it, and how you can pass styles to child components. We also talked about how to use CSS isolation with preprocessors and how to disable automatic bundling.

Thanks to the popularity of this post, it got turned into official Microsoft ASP.NET Core documentation!

If you have any comments or feedback, please let me know by commenting or connecting on Twitter.

]]>
<![CDATA[ The .NET Stacks #15: The final preview and ML.NET with Luis Quintanilla ]]> https://www.daveabrock.com/2020/09/05/dotnet-stacks-15/ 608c3e3df4327a003ba2fe59 Fri, 04 Sep 2020 19:00:00 -0500 Welcome to the start of another week—I hope you are well and safe. We had an insanely busy week in the .NET world, so let’s get to it.

It’s the final … preview

Are you ready for .NET 5? This week, Microsoft announced the release of .NET 5 Preview 8 (and also ASP.NET Core and EF 5 updates). This means that, aside from bug fixes, .NET 5 is more or less feature complete. Up next, we have two go-live release candidates, and then the official release in November.

You’ll want to hit up the links (and GitHub) for all the gory details, but here’s a quick recap of what’s coming in .NET 5 from a high level:

We discussed this a few weeks back, but for the preview itself, the ASP.NET Core side is jam-packed with Blazor improvements and capabilities. There’s CSS isolation, lazy loading, UI focus, and more. From the EF 5 side, there’s table-per-type mapping, table-valued functions, and more—and, as mentioned last week, many-to-many is now in the daily builds.

Speed up runtime with C# source generators

While not directly linked to C# 9, I’m getting excited about C# source generators (I wrote a “first look” post back in May).

These generators are basically code that runs during compilation and can produce additional files that are compiled together with the rest of your code. It’s a compilation step that generates code for you, based on your code.
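If you’re curious what one looks like, here’s a minimal sketch in the shape of the Microsoft samples (the generated namespace and members are invented for illustration):

using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

[Generator]
public class HelloWorldGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No registration needed for this simple sketch.
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Emit a new source file that's compiled with the rest of the project.
        const string source = @"
namespace Generated
{
    public static class HelloWorld
    {
        public static string Greeting => ""Hello from a source generator!"";
    }
}";
        context.AddSource("HelloWorld.g.cs", SourceText.From(source, Encoding.UTF8));
    }
}

Any code in the project can then call Generated.HelloWorld.Greeting as if you had written the class yourself.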

Here’s a big benefit that I wrote about:

If you’ve ever leaned on reflection in your projects, you might begin to see many use cases for these solutions—C# source generators provide a lot of advantages that reflection currently offers and few, if any, drawbacks. Reflection is extremely powerful when you want to query properties and attributes you don’t know about when you typically compile. Of course, getting type information at runtime can incur a large performance cost, so offloading this to compilation is definitely a game-changer in C#.

This week, Microsoft announced some new C# source generator samples. You can also check out the design document and a source generators cookbook. If you’ve tried it, let me know your thoughts!

Dev Discussions: Luis Quintanilla

Machine learning is a fascinating world and, to many, a complicated one. As .NET developers, we definitely see the benefit in training our data but between the learning curve and using other languages like Python for machine learning—a language .NET devs might not be familiar with—ML is often sent to a developer’s “I should look into that sometime” queue.

That changed in 2018, when Microsoft launched ML.NET—a free, open source, x-plat machine learning framework for .NET. With ML.NET, you can use your favorite languages like C# or F# to work with your custom machine learning models. The idea is to meet you where you are and make ML more accessible.

There’s no one better to talk to about this than Luis Quintanilla. Luis has been with ML.NET since the beginning and was eventually scooped up by Microsoft to work on the docs for ML.NET.

Luis Quintanilla

What made you focus on ML.NET over other development tech?

I could write an entire essay on why ML.NET, but all of the reasons can be summarized in a single word: .NET. Now, to expand on that, here are a few reasons why I enjoy ML.NET so much!

Languages

Though not unique to .NET, I like statically-typed languages. I’m sure many of the readers are able to build their applications and successfully run them without errors on the first try 😉. That, however, is usually not my experience. Therefore, I prefer catching as many errors as possible at compile time. Another reason I like types is they provide a way of documenting your code.

This is of extreme importance when working in data science and machine learning scenarios. Although ultimately the data used by machine learning algorithms to train models is encoded as numbers, knowing your data schema and checking it at compile time may help reduce the number of errors in your code as you transform your data.

Lately I’ve been doing more F# development and the more I use it, the more I like it. F# for me provides a nice balance between Python and C#. F# gives you the productivity and succinctness of a language like Python, while still having the compiler and many other neat features at your disposal.

Runtime

The .NET runtime is fast and performant. This is important in two scenarios: training machine learning models and deploying them. A good part of training machine learning models involves performing operations on vectors and matrices. .NET provides Single Instruction Multiple Data (SIMD) enabled types via the System.Numerics namespace. ML.NET leverages these types where possible to increase the throughput of the training operations, making training fast and efficient.
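As a small illustration of what those SIMD-enabled types look like, here’s a dot product written against Vector<float>. This is my own sketch, not ML.NET’s internals:

using System;
using System.Numerics;

static float DotProduct(float[] a, float[] b)
{
    var sum = Vector<float>.Zero;
    int i = 0;

    // Multiply and accumulate Vector<float>.Count lanes per iteration.
    for (; i <= a.Length - Vector<float>.Count; i += Vector<float>.Count)
        sum += new Vector<float>(a, i) * new Vector<float>(b, i);

    // Collapse the vector lanes to a scalar, then handle leftover elements.
    float result = Vector.Dot(sum, Vector<float>.One);
    for (; i < a.Length; i++)
        result += a[i] * b[i];

    return result;
}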

Tooling

.NET has world-class tooling across the board and you can’t go wrong with any of your choices. Visual Studio is an excellent IDE packed with tons of functionality to help developers be more productive. Alternatively, another great IDE for .NET is JetBrains Rider. If you’re looking for a more lightweight development environment, you can also use Visual Studio Code. When working with F#, you can use the Ionide extension, which makes F# development a pleasant experience.

Data science and machine learning workflows are very experimental. This means that you sometimes may want an interactive environment where you get near real-time feedback on the outputs generated by your code. You’d also like a way to visualize your data inline. Within the data science community, a popular interactive computing environment is Jupyter Notebooks. You can leverage this interactive environment in .NET through .NET Interactive, which provides, among many things, a kernel for you to run .NET code interactively.

Extensible

Although .NET is great, a large portion of the data science and machine learning space is predominantly made up of libraries and frameworks built in Python. That, however, does not limit ML.NET, because it is extensible. ML.NET supports working with TensorFlow and Open Neural Network Exchange (ONNX) models. TensorFlow is a popular platform for building machine learning models. Using TensorFlow.NET, a set of C# bindings for TensorFlow, users can train and deploy TensorFlow models with ML.NET. ONNX is an open format built to represent machine learning models. This means that you can train a model in other popular tools and frameworks like Azure Custom Vision, PyTorch, or Scikit-learn. Then, you can export or convert that model to ONNX and consume it using ML.NET.
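For a feel of the ONNX side, here’s a rough sketch of wiring an exported model into an ML.NET pipeline for scoring. It assumes the Microsoft.ML.OnnxTransformer package, and the column names depend entirely on your model’s actual input and output nodes:

using Microsoft.ML;

var mlContext = new MLContext();

// "input" and "output" are placeholders for the model's real node names.
var pipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnNames: new[] { "output" },
    inputColumnNames: new[] { "input" },
    modelFile: "model.onnx");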

Open Source & Community

ML.NET, like .NET, is open source. This allows for the community to collaborate and contribute to it. Users have various ways of contributing to ML.NET: whether it’s raising issues, updating documentation, or submitting pull requests, they’re all valuable contributions that only help make the framework that much better for everyone to use.

Correct me if I’m wrong, but I believe a big mission of ML.NET is making machine learning accessible—that is, I shouldn’t have to be an expert in machine learning to do it in .NET. Even still: how much should I know before I get started?

That’s right! ML.NET provides many ways of interacting with it depending on what you’re most comfortable with. The easiest way to get started is by using the tooling. The tooling provides a low-code way of training and consuming ML.NET models. If you prefer a graphical user interface, you can try Model Builder, a Visual Studio extension that guides you through the steps involved in training a machine learning model. As long as you have a general sense of the problem you’re trying to solve (classify text, predict a number, categorize images) and you have a dataset, Model Builder takes care of the rest.

Alternatively, if you prefer working on the command line, you can use the ML.NET CLI, a .NET command line tool for training ML.NET models and generating consumption code. The idea is very much the same as Model Builder, except now you interact with the tooling via the command line. The CLI is also a great choice for Machine Learning Operations (MLOps) scenarios, where model training and deployment is done as part of a continuous integration (CI) or continuous deployment (CD) pipeline.
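For a flavor of the CLI, training a classification model looks something like this. The exact verbs and flags have shifted between releases, so treat this as a sketch and check mlnet --help for your version:

mlnet classification --dataset "reviews.csv" --label-col "sentiment" --train-time 60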

For folks who want more control, prefer a code-first approach, or are more familiar with machine learning concepts, there are other ways of using ML.NET. One is with the ML.NET Automated ML (AutoML) API. The AutoML API is leveraged by the tooling to try to find the “best” model. The best model for your problem depends on many factors, such as the quantity and distribution of your data and the time to train. Therefore, it helps to try different algorithms with different parameters.

If you want full control over your machine learning pipeline, you can use the ML.NET API. The API provides you with direct access to data loaders, transformations, trainers, and prediction components that you can configure as needed to solve your problem.

One of the nice things is that none of the ways of using ML.NET are mutually exclusive. You can start off with the tooling to bootstrap the model training process and, from there, use the ML.NET API to make further refinements. In the end, it’s all about choice; depending on your experience with machine learning and preferred workflow, there’s an option for you.
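To make the code-first path concrete, here’s a minimal sentiment pipeline sketch against the ML.NET API. The file name, column layout, and class names are invented for illustration:

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext();

// Load a two-column CSV: the text to classify, plus a boolean label.
IDataView data = mlContext.Data.LoadFromTextFile<SentimentInput>(
    "reviews.csv", hasHeader: true, separatorChar: ',');

// Featurize the raw text, then train a binary classifier on the result.
var pipeline = mlContext.Transforms.Text
    .FeaturizeText("Features", nameof(SentimentInput.Text))
    .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

var model = pipeline.Fit(data);

var engine = mlContext.Model.CreatePredictionEngine<SentimentInput, SentimentPrediction>(model);
Console.WriteLine(engine.Predict(new SentimentInput { Text = "Absolutely loved it" }).IsPositive);

public class SentimentInput
{
    [LoadColumn(0)] public string Text { get; set; }
    [LoadColumn(1)] public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")] public bool IsPositive { get; set; }
}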

Check out the full interview with Luis at my site.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

Three community standups this week: ML.NET joins the club, Desktop runs through WinForms innovations and XAML tooling, and ASP.NET discusses Razor tooling.

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Luis Quintanilla (2 of 2) ]]> https://www.daveabrock.com/2020/09/05/dev-discussions-luis-quintanilla-2/ 608c3e3df4327a003ba2fe58 Fri, 04 Sep 2020 19:00:00 -0500 This is the full interview from my discussion with Luis Quintanilla in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

Machine learning is a fascinating world and, to many, a complicated one. As .NET developers, we definitely see the benefit in training our data but between the learning curve and using other languages like Python for machine learning—a language .NET devs might not be familiar with—ML is often sent to a developer’s “I should look into that sometime” queue.

That changed in 2018, when Microsoft launched ML.NET—a free, open source, x-plat machine learning framework for .NET. With ML.NET, you can use your favorite languages like C# or F# to work with your custom machine learning models. The idea is to meet you where you are and make ML more accessible.

There’s no one better to talk to about this than Luis Quintanilla. Luis has been with ML.NET since the beginning and was eventually scooped up by Microsoft to work on the docs for ML.NET. Luis had so much great stuff to share that we’ll split this interview up into two parts. Last time, we talked about his path to Microsoft, the value of ML.NET, and how to get started. Today, we’re talking about using ML.NET over something like Azure Cognitive Services, use cases for ML.NET, and more.

Luis Quintanilla

Where is the dividing line for when I should use machine learning, or use Azure Cognitive Services?

This is a really tough one to answer because there’s so many ways you can make the comparison. Both are great products and in some areas, the lines can blur. Here are a few of them that I think might be helpful.

Custom Training vs Consumption

If you’re looking to add machine learning into your application to solve a fairly generic problem, such as language translation or identifying popular landmarks, Azure Cognitive Services is an excellent option. The only knowledge you need to have is how to call an API over HTTP. Being able to work via HTTP also provides you with flexibility over what you use to make the requests to the API. If you want to run your machine learning workflow with Azure Cognitive Services via cURL as part of a background Cron job, that’s perfectly acceptable.
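As a sketch of how thin that HTTP surface is, here’s a raw call to the Text Analytics v3.0 sentiment endpoint with HttpClient; the resource name and key are placeholders:

using System;
using System.Net.Http;
using System.Text;

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");

// One JSON document in, sentiment scores out.
var body = new StringContent(
    @"{""documents"":[{""id"":""1"",""language"":""en"",""text"":""I loved this product!""}]}",
    Encoding.UTF8, "application/json");

var response = await client.PostAsync(
    "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.0/sentiment", body);

Console.WriteLine(await response.Content.ReadAsStringAsync());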

Azure Cognitive Services provides a set of robust, state-of-the-art, pretrained models for a wide variety of scenarios. However, there’s always edge cases. Suppose you have a set of documents that you want to classify and the terminology in your industry is rare or niche. In that scenario, the performance of Azure Cognitive Services may vary because the pretrained models most likely have never encountered some of your industry terms. At that point, training your own model may be a better option, which for this particular scenario, Azure Cognitive Services does not allow you to do.

That’s not to say Azure Cognitive Services does not allow you to train custom models. Some scenarios like language understanding and image classification have training capabilities. The difference is, training custom models is not the default for Azure Cognitive Services. Instead, you are encouraged to consume the pretrained models. Conversely, with ML.NET, for training purposes, you’re always training custom models using your data.

Online vs Offline

By default, Azure Cognitive Services requires some form of internet connectivity. In scenarios where there is strong network connectivity, this is not a problem. However, for offline or air-gapped environments, this is not an option. Although in some scenarios, you can deploy your Azure Cognitive Services model as a container, therefore reducing the number of requests made over the network, you still need some form of internet connectivity for billing purposes. Additionally, not all scenarios support container deployments—therefore, the types of models you can deploy are limited.

While Azure in general makes sure to responsibly protect the privacy and security of user data in the cloud, in some cases—whether it’s a regulatory or organizational decision—putting your data in the cloud may not be an option. In those cases, you can leverage Azure Cognitive Services container deployments. Like with the offline scenario, the types of models you can deploy is limited. Additionally, you most likely would not be training custom models since you wouldn’t want to send your data to the cloud.

ML.NET is offline first, which means you can train and deploy your models locally without ever interacting with a cloud environment. That being said, you always have the option to scale your training and consumption by provisioning cloud resources. Another benefit of being offline first is that you don’t have to containerize your application in order to run it locally. You can take your model and deploy it as part of a WPF application without the additional overhead of a container or connecting to the internet.

Cost

Cost is always a tricky thing to talk about, because what exactly do you account for when calculating cost? With Azure Cognitive Services, you pay for your consumption. While there are different tiers you can subscribe to depending on your usage, and the free tier is fairly generous, you always pay for what you use. Also, because all the resources are managed for you, you don’t have to spend time maintaining any of them. Because most of the models you use are pretrained, it also means you don’t have to spend your time or resources training a model.

ML.NET is always free—both the library and the tooling. Since the resources for training and deployment are not managed for you, you have to account for them yourself. However, if your machine learning model is deployed alongside your existing applications, the management, resources, and overhead may be negligible. In terms of training, since you’re always training a custom model, the time and resources devoted to that have to be taken into account. The benefit, though, is that your model is fine-tuned to your specific problem.

Revisiting the offline scenario: if your model is deployed in an offline environment or perhaps inside an existing WPF application, your costs at that point are practically zero, because any usage of the model isn’t costing you anything extra. So again, there are many ways you can look at cost, and it all depends on what you decide to take into account.

Do you have some practical use cases for using ML.NET in business apps today?

Definitely! If you have a machine learning problem, the framework you use matters less than whether the scenario is supported. Since ML.NET supports classical machine learning as well as deep learning scenarios, it can help solve a wide variety of problems. Some examples include:

  • Sentiment analysis
  • Sales forecasting
  • Image classification
  • Spam detection
  • Predictive maintenance
  • Fraud detection
  • Web ranking
  • Product recommendations
  • Document classification
  • Demand prediction
  • And many others…

For users interested in seeing how companies are using ML.NET in production today, I would suggest visiting the ML.NET customer showcase.

What kind of projects are you hacking away at now?

My interests are too many. If only there were enough time to do them all! Most recently we’ve started doing the Machine Learning Community Standup. This is a place where folks who are interested in machine learning within the .NET ecosystem can come and meet other members of the community, learn what’s new in the space, and ask questions. You can tune in every other Wednesday to check that out. If you’re working on projects with ML.NET or other .NET machine learning libraries, we’d love to hear from you!

Also, I mentioned it earlier but folks should stay tuned for the next Virtual ML.NET Conference. The first one took place earlier this year and it was a great success. Not only was engagement high during the conference, but interest and excitement has increased since then—which can only mean good things for ML.NET. For this next version, instead of workshops and presentations, it’ll be hands-on in the form of a hackathon. Again, at the moment there aren’t many details, but make sure to stay in touch on Twitter or join the Slack community to get the latest updates.

Personally, I’ve been dabbling in reinforcement learning, which is another form of artificial intelligence. Though many of the concepts don’t transfer over, there are some areas where my prior knowledge is applicable, especially when it comes to the use of neural networks as part of deep reinforcement learning.

Much of that work takes me away from .NET, which I always try to get a healthy dose of. As a result, I’ve been doing more F# development. I still have a ways to go, but I really like the language and always try to get better at it.

For throwaway projects and learning exercises, I’ve been playing around with Racket. In some ways, my learnings from F# have made me a better Lisp developer. I’ve always been fond of the language, considering that for the longest time it was THE language for artificial intelligence. Though I’m far from building Lisp bindings for ML.NET (which I think is feasible), it’s always fun to get outside of your comfort zone.

Of course, ML.NET is never far away either and I’m always trying to find new ways of creatively using ML.NET.

From time to time, I’ll post some of the things I’m experimenting with on my blog so feel free to check it out.

What kind of stuff do you work on over at your Twitch stream? When is it?

Currently I stream on Monday and Wednesday mornings at 10 AM EST. My stream is a live learning session where folks sometimes drop in to learn together. On the stream, I focus on building data and machine learning solutions using .NET and other technologies. The language of choice on stream is F# though I don’t strictly abide by that and will use what works best for getting the solution working.

Most recently, I built a deep learning model for malware detection using ML.NET. I’ve been trying to build .NET pipelines with Kubeflow, a machine learning framework for Kubernetes, on the stream. I’ve had some trouble with that, but that’s what the stream is about: learning from mistakes.

Inspired by a Reddit post asking how to get started with data analytics in F#, I’ve started working with F# and various .NET tools like .NET Interactive to analyze the arXiv dataset.

If any of that sounds remotely interesting, feel free to check out and follow the channel on Twitch. You can also catch the recordings from previous streams on YouTube.

What is your one piece of programming advice?

Just do it! Everyone has different learning styles, but I strongly believe no amount of videos, books, or blog posts compares to actually getting your hands dirty. It can definitely be daunting at first, but no matter how small or basic the application is, building things is always a good learning experience. A lot of work goes into producing content, so end users typically get the polished product and the happy path. When you’re not sure where to start or would like to go more in depth, these resources are excellent. However, once you stray from that guided environment, you start making mistakes. Embrace these mistakes, because they’re a learning experience.

You can connect with Luis Quintanilla at his blog or on Twitter.

]]>
<![CDATA[ NDepend: Boost Your Team's Code Quality ]]> https://www.daveabrock.com/2020/09/04/boost-code-quality-ndepend/ 608c3e3df4327a003ba2fe57 Thu, 03 Sep 2020 19:00:00 -0500 Have you used NDepend before? If you’ve been working on .NET for some time, you may have either used it or heard of it—I believe it’s been around since 2007 or so.

If you aren’t familiar, NDepend is a robust static code analysis tool that allows you to gain a concrete understanding of your project’s complexity, technical debt, associations, dependencies, and more. You can easily integrate NDepend into your daily workflow with a Visual Studio extension, and it has extensions for your build pipelines as well.

I gave NDepend 2020.1 a test spin and have been using it for the last few weeks. I’ll spend the rest of the post writing about my experiences.

I couldn’t possibly go through all the features and functionality that NDepend offers, so this post will show off what caught my eye. You can take a look at the docs for the full treatment.

Disclaimer: I am reviewing NDepend with a free review license.


Attach NDepend to your .NET solution

Once you install NDepend, the easiest way to get started is to attach NDepend to an existing .NET solution in Visual Studio.

In your version of Visual Studio, enter ndepend in your global search bar at the top, and click Analyze a VS solution. Browse to a solution file you want NDepend to analyze, then click OK. At the next screen, select the assemblies you want analyzed, and let NDepend loose.

The first time you analyze, a beginner-friendly screen will ask you how you want to proceed. Click View NDepend Dashboard.

[Image: go to the NDepend dashboard]

Review the NDepend dashboard

The dashboard gives you a nice view of overall tech debt, quality gates, and issue and rule estimation. My sample app needs some work but … in my career, I’ve definitely seen worse.

[Image: the NDepend dashboard]

You can drill into the dashboard to get details about any violations.

Technical debt severity levels

If you’re wondering about the NDepend severity levels, here’s a quick guide:

  • Info - a small improvement can be made
  • Minor - a type of warning that won’t have a significant impact if not addressed
  • Major - should be fixed quickly but can wait until the next run
  • Critical - these issues shouldn’t move to production
  • Blocker - it must be fixed

Once you improve your code, you can rerun the analysis and NDepend will show you how you’ve improved from the last baseline.

Visualizing complexity

It’s been a few years since I last worked with NDepend, and a welcome new addition I noticed is the visualization of cyclomatic complexity. While it doesn’t tell the whole story, high complexity numbers are typically bad news for code quality and testability.

Here you’ll see a nice visualization view where you can hover over “trouble spots” in your application. In the top-left of the image, you’ll see details about the offending method (I hovered over the redness).

[Image: code complexity visualization]

This capability isn’t just for cyclomatic complexity—you can do it for documentation (comments), test coverage, and more. It’s a great way to see the high-level state of your application.

I can personally see a lot of value in using the code coverage visualization, as it easily shows which components are in need of more coverage.

Understand dependencies

NDepend ships with a dependencies view that allows you to visualize the relationships between your projects. It’s an interactive view where you can click and hover over dependencies to see how tightly or loosely coupled they are.

[Image: the dependency graph]

If you tell your boss something is taking a while because it’s a “bowl of spaghetti” and he or she wants evidence, here’s where you can get it.

Query code with the CQLinq syntax

NDepend offers Code Query LINQ (CQLinq), which lets you query .NET code with LINQ. If you want greater control over the metrics, or want a one-off result, it’s a great capability. You can use any C# LINQ syntax.

Let’s say I want to find any methods where the cyclomatic complexity is greater than 10.

The first thing I notice is the Intellisense-like previews as I type:

[Image: CQLinq Intellisense-like previews]

Now, I can execute the query like this:

from m in Methods where m.CyclomaticComplexity > 10
select new { m, m.CyclomaticComplexity }

I don’t need to click a Run button or anything—the results will populate automatically.

[Image: CQLinq query results]

I can run this just once, or even save my queries for later use. As a C# developer you’re likely familiar with the LINQ syntax, so this offers a quick, convenient way for you to grab just the data you need.
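For instance, here’s another query I sketched out to flag long methods—NbLinesOfCode is one of the many code metrics CQLinq exposes (check the CQLinq docs for the full set):

from m in Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }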

Integrate NDepend with your continuous integration pipeline

To make the greatest impact to your team, you can use NDepend with your CI tools—it supports any major CI tool like Azure DevOps/TFS, TeamCity, Jenkins, SonarQube, and more. You will need to purchase a Build Machine Edition or Azure DevOps/TFS Edition license for this capability.

If you are using something like Azure DevOps, you can use an extension that contains a build task. This will produce the dashboard at a moment’s notice, just like we saw in our local Visual Studio.

Much like how you can enforce quality gates in your CI tooling for code coverage and coding conventions, you can also do so for NDepend. The extension ships with a dozen quality gates for things such as technical debt amount or issues with a particular severity—for example, you could enforce a gate where developers can’t push any code with Major or Critical issues. The quality gates are also C# LINQ queries, so you can easily tweak existing gates or create new ones.

On top of enforcing code quality across the team, you have the advantage of the metrics being in one place (and not scattered across dev machines), and you can also report progress up the chain, if needed.

This is a powerful capability and if you’re paying for NDepend already, you should give this serious consideration. This will morph your use of NDepend from “we should really fix those issues” to “we’ve set a standard to not introduce anything to our codebase with these issues”—a huge difference.

Take a look at the NDepend CI documentation for more details.

Wrap up

In this article, I took a test drive through the latest version of NDepend. Once we attached NDepend to a sample solution, we discussed the NDepend dashboard, complexity visualization, the dependency graphs, querying code with the CQLinq syntax, and reviewed its CI capabilities.

I was happy to get reacquainted with NDepend, take a look at the improvements it has made over the years, and see that it’s still the best-in-class tool for .NET static analysis. If you have the budget for it, you can’t go wrong.

]]>
<![CDATA[ How To Not Hate JavaScript ]]> https://www.daveabrock.com/2020/09/02/how-to-not-hate-javascript/ 608c3e3df4327a003ba2fe56 Tue, 01 Sep 2020 19:00:00 -0500 “I hate JavaScript.”

Let’s get real: we’ve all said it. When we say these words in anger, what do we really hate about JavaScript?

Is it node_modules? Is it Webpack? Is it the convoluted ecosystem which seems to change every three months? Is it npm or yarn, or package dependencies in general? Is it the lack of a standard build system or a standard library? Is it the fact that it’s dynamically and loosely typed, with almost no restrictions? Is it the inherent design flaws? The insecure nature? The pedantic framework wars?

While these are all valid gripes, let’s think about it another way: is a big reason why we sometimes say we hate JavaScript because we don’t understand its core concepts well enough?

I’ve experienced this journey first-hand. As a back-end C# developer, I was arrogant: with my years of experience as a software engineer, I thought that, with a lot of the constructs being similar, it wouldn’t take me long to master JavaScript. It took me years to understand that JavaScript is so different from any other language, with intricacies all of its own.

I got up to speed, rushed to use other frameworks, embraced TypeScript (as you should!), but still couldn’t stand JavaScript. My problem: I never took the time to learn and understand concepts where, if you don’t know or understand them, you’ll spend hours (sometimes days) on a bug, fix it thanks to some luck, and never truly understand the source of the problem—and, as a result, be subjected to an infinite loop of JavaScript incompetence.

After I began to understand why I despised JavaScript, I took action to learn more. In the last year or two, after filling in some gaps, I find JavaScript to be a somewhat pleasant experience. I’m not saying it’s perfect—what is?—but I understand why things are the way they are.

So, allow me to share with you a few things that, once I learned them, made me understand JavaScript so much better and made my experience a lot less frustrating. To you, some might be “you didn’t know this?” and some might be difficult to grasp—and that’s OK. Wherever you are in your journey, I hope this helps you.

By the end, if you dig deep into these concepts my hope is that you can appreciate and understand JavaScript a little more, for the good and the bad.


Understand execution context

To completely understand and work with advanced JavaScript concepts like closures, scopes, and hoisting, you need to understand how JavaScript’s execution contexts work. If you ever have trouble understanding why a variable is undefined when you do not want it to be, the execution context is a good place to start.

When talking about execution context, we need to understand two things: the global context and the function execution context.

The global context

The global context is the default context—it’s where code that does not sit inside a function resides. Before you run any code at all, the context contains two items: the global object (window in browser-based JS, global for Node.js) and the this variable, which is set to the global object.

The global context also sets up space in memory for our variables and functions, and assigns variables to undefined while putting function declarations in memory. This takes place before any code is run, and is called the creation phase.

If you look at this code:

var firstName = 'Dave';
var lastName = 'Brock';
var occupation = 'Software Developer';

function getDetails() {
    return {
        firstName: firstName,
        lastName: lastName,
        occupation: occupation
    };
};

My return object could use the short-hand object syntax, but I’m doing it this way for clarity.

After the creation phase—we are now in the execution phase—the JS engine executes the code line by line and assigns variables to the values you specified. For example, if you log a variable after creation but before its assignment executes, you’ll get back undefined.

console.log(firstName); // undefined
console.log(lastName); // undefined

var firstName = 'Dave';
var lastName = 'Brock';
var occupation = 'Software Developer';

function getDetails() {
    return {
        firstName: firstName,
        lastName: lastName,
        occupation: occupation
    };
};

This is what hoisting is: assigning variables undefined during the creation phase. Our industry is full of fancy words for simple things. I hope this clears things up.

Functional execution context

The other context you need to know is the functional execution context (for, of course, functions). One is created whenever a function is called (or invoked, to be fancy). We get one for every function call—and since the global object and global execution context are already in place, they don’t need to be created again. This context creates an arguments object and then, just like the global context, creates a this object, sets up space for variables and functions, assigns variables to undefined, and puts any function declarations in memory.

When a function is invoked, a new functional execution context is created for it and added to the call stack. After execution completes, it gets removed (popped).

For more details, check out Tyler McGinnis’s wonderful article.

Know the event loop

If I can pick one thing I wish I knew when I started working on JavaScript, it’s the event loop. How I would have loved for someone to find my bug as a junior engineer, pull me aside, and say: “Hey, stop what you’re doing and take the rest of the day to learn the event loop. Thank me later.”

Because the JavaScript engine is single-threaded, it’s vitally important that you understand this. Being single-threaded is actually not the worst thing, as it avoids a lot of concurrency issues—but you still need to understand how not to block the single thread you have at your disposal.

Here’s the job and purpose of the event loop in JavaScript: it looks at the call stack and runs anything that is currently on the stack. If it’s empty, it looks at the message queue, and pushes its contents onto the stack in order.

Because JS is single-threaded, it has one call stack. But what happens if something on the stack takes forever and blocks things? For that we have browser APIs—we can offload work like setTimeout and DOM calls to give the illusion of concurrency.

A common example: how is this code processed?

setTimeout(() => {
    console.log('oh hi!')
}, 5000)

When this happens, we call out to a browser API that is NOT part of the JavaScript runtime. When the five seconds completes, the API lets JavaScript know, and an item is added to the message queue, which is first in, first out. When the call stack is empty, JS takes the message, places it on the stack, and runs it.

This is an important distinction to make: when you invoke setTimeout you are not saying it’ll return in five seconds—you’re saying that’s the minimum amount of time before it returns, depending on what else is in the call stack.

Your knowledge of the call stack is vital as your code becomes more complex. These concepts weren’t clear to me until I watched Philip Roberts’s talk, which is on YouTube. In my opinion, it’s well worth your time.

Know async and await

If you’re a regular reader here, you’re likely a C# developer—so I don’t need to tell you how awesome the async and await paradigm is. The excitement is ever-present in the JavaScript community as well. From callbacks to promises and now async/await, the JS async capabilities have come a long way.

However, like in C#, don’t let its simple syntax fool you into thinking you don’t have to be aware of how it all works. You can view async and await as a wrapper around the Promise infrastructure.

When you do something like this…

function sayHiToDave() { return "Hi, Dave" };

… if you execute it in your console, you’ll get back Hi, Dave. As you should.

Now, if you make it async:

async function sayHiToDave() { return "Hi, Dave" };

If you invoke this in your dev tools, you’ll see it returns a Promise. An async function’s return value is always wrapped in a promise.

[Image: an async function returning a Promise in the console]

So, to consume the return value, you would do something like:

sayHiToDave().then(console.log);

Of course, what’s the fun of async without the await? Await avoids all this .then() work. It’ll pause on that line until the promise settles, then return the value you wanted.

So just understand that async and await are mostly syntactic sugar over Promises, saving you from the .then() chaining they are famous for.

However, know the downsides: async/await looks synchronous. In fact, await blocks further execution of your function until the promise is fulfilled. Other tasks can run, but your code is blocked. If you’re awaiting a lot of things sequentially, you can face a performance hit, as each await waits for the previous one to finish.

If this sounds like something you face, the wonderful MDN async/await article says: you can offset this by storing Promise objects in variables, then awaiting them all.

As with anything, know the drawbacks and try not to use async/await without knowing what’s really happening.

Know the big three array methods

There are so many array methods in JavaScript. For most of them, you should be able to reference the docs and apply your knowledge. But the three you must truly master and wrap your mind around are map(), filter(), and reduce(). These are used so frequently and are so important. They will take you far.

.map()

A lot of us know this one, but as a recap: the map() method creates a new array based on a previous array. Here, I would just like to send city names to a new array. The important thing to note with these methods is that a new array is created (and it does not update your existing array).

const cities = [
  { id: 1, name: 'Chicago', state: 'IL', population: 2693976 },
  { id: 2, name: 'Houston', state: 'TX', population: 2320268 },
  { id: 3, name: 'Minneapolis', state: 'MN', population: 453403 },
  { id: 4, name: 'Madison', state: 'WI', population: 258054 },
  { id: 5, name: 'San Antonio', state: 'TX', population: 1327407 }
]

// ['Chicago', 'Houston', 'Minneapolis', 'Madison', 'San Antonio']
const justCityNames = cities.map(city => city.name);

.filter()

To take it a step further, we can use filter() to create a new array based on some criteria. For example, let’s get back all cities with a population more than 1 million people.

const cities = [
  { id: 1, name: 'Chicago', state: 'IL', population: 2693976 },
  { id: 2, name: 'Houston', state: 'TX', population: 2320268 },
  { id: 3, name: 'Minneapolis', state: 'MN', population: 453403 },
  { id: 4, name: 'Madison', state: 'WI', population: 258054 },
  { id: 5, name: 'San Antonio', state: 'TX', population: 1327407 }
]

// Returns the Chicago, Houston, and San Antonio objects
const bigCities = cities.filter(city => city.population > 1000000);

.reduce()

To be honest, .map and .filter aren’t too difficult to grasp—it’s reduce() that will hold the key to your glory (and your frustration, if you don’t understand it). While methods like map() and filter() make you another array, reduce() has greater ambitions. It says: “you give me an array, and I’ll transform it for you to whatever you want.” This can be an object, an array, an int, a calculation. Anything.

Let’s say we want to add up total population for all our cities using an add function:

function add(array) {
    return array.reduce((total, num) => {
        return total + num
    }, 0);
};

const cities = [
  { id: 1, name: 'Chicago', state: 'IL', population: 2693976 },
  { id: 2, name: 'Houston', state: 'TX', population: 2320268 },
  { id: 3, name: 'Minneapolis', state: 'MN', population: 453403 },
  { id: 4, name: 'Madison', state: 'WI', population: 258054 },
  { id: 5, name: 'San Antonio', state: 'TX', population: 1327407 }
]

const cityPopulations = cities.map(city => city.population);
const totalPopulation = add(cityPopulations); // 7053108

Our reduce call takes two arguments: a callback that is invoked for every element, and an initial value. In our case, the initial value is 0 (make sure to pass this to avoid NaN frustrations).

For each iteration, num will be the current element of the array—in our situation, the population of a city. total starts at 0 and then becomes whatever the previous iteration returned. Cool?

This is a simple example, and even this is a little trippy. My advice? Do reduce() calls until you can’t see straight, then do some more. Pass in an array, transform it into anything imaginable. Because once you master reduce, you rule the JavaScript world.

When to use map, filter, and reduce, in one sentence

Use map() when you are turning an array into another array, .filter() to turn an array into another array by filtering (or removing, most likely) elements, and reduce() to transform an array into something magical (specifically, not an array).

Understand that arrow functions aren’t just for conciseness

Starting with ES6, you can use arrow functions. Before ES6, here’s how we’d write our previous filter function:

const cities = [
  { id: 1, name: 'Chicago', state: 'IL', population: 2693976 },
  { id: 2, name: 'Houston', state: 'TX', population: 2320268 },
  { id: 3, name: 'Minneapolis', state: 'MN', population: 453403 },
  { id: 4, name: 'Madison', state: 'WI', population: 258054 },
  { id: 5, name: 'San Antonio', state: 'TX', population: 1327407 }
]

var bigCitiesOld = cities.filter(function(city) {
    return city.population > 1000000;
});

Instead, we can use an arrow function, whose => offers an implicit return and makes things a lot cleaner and easier. We also don’t need to type out the function syntax manually. Check out this one-liner:

const bigCities = cities.filter(city => city.population > 1000000);

This is great and, for a lot of us, this completes our understanding of arrow functions. But when you make that subtle change of deleting the function syntax, you also change the context of this. And you definitely need to understand … this.

In short, arrow functions don’t have their own this value. When you use function() syntax, it receives a this value automatically, even when you don’t want it! As a result, before arrow functions, you’ve probably written a hack like this:

function addEverything(items) {
    var self = this;
    items.forEach(function(thing) {
        self.addAThing(thing)
    });
}

You need to do this var self = this garbage because your inner function doesn’t inherit this from the outer function—meaning this will be window or undefined if you don’t do the hack. You could also do .bind but the ugliness remains.

With arrow functions, you can just do this:

function addEverything(items) {
    items.forEach(thing => this.addAThing(thing));
}

No hacks needed. Just know that you’ll still need the function() syntax for methods called using the dot operator (object.method)—these functions receive this from whoever called them. For everything else, use arrow functions.

You’ll want to study Jason Orendorff’s ES6 In Depth: Arrow Functions piece until the proverbial light bulb goes off in your head.

Wrap up

In this post, we covered ways to understand JavaScript a little better. We worked through execution contexts, the event loop, async/await, array methods, and the nuances of arrow functions.

I hope you found this article useful. What are some JS pieces that boosted your confidence once you understood how they worked? Let me know in the comments!

]]>
<![CDATA[ The .NET Stacks #14: Checking in on NuGet changes, many-to-many in EF Core, community roundup, and more! ]]> https://www.daveabrock.com/2020/08/29/dotnet-stacks-14/ 608c3e3df4327a003ba2fe55 Fri, 28 Aug 2020 19:00:00 -0500 Happy Monday. Do you ever watch a satire and think it’s a documentary?

This week, we’ll:

  • Get excited about many-to-many in EF Core
  • Take a look at some current and upcoming NuGet changes
  • Check in on the community

Many-to-many in EF Core 5

So this is exciting: many-to-many support is now included in the Entity Framework Core daily builds, and the team spent this week’s community standup showing it off.

A big part of this work includes the concept of skip navigations, or many-to-many navigation properties. For example, here’s a basic model that assigns tags to links in a newsletter (what can I say, it’s fresh on my mind)—a good use case for many-to-many that’s heavily inspired by (and/or outright stolen from) the GitHub issue:

public class Link
{
    public int LinkId { get; set; }
    public string Url { get; set; }
    public string Description { get; set; }

    public List<LinkTag> LinkTags { get; set; }
}

public class Tag
{
    public string TagId { get; set; }
    public string Description { get; set; }

    public List<LinkTag> LinkTags { get; set; }
}

public class LinkTag
{
    public int LinkId { get; set; }
    public Link Link { get; set; }

    public string TagId { get; set; }
    public Tag Tag { get; set; }

    public DateTime LinkCreated { get; set; }
}

So, to load a link and its tags, I’d need to use two navigation properties, and refer to the joining table when performing queries:

var linksAndTags
    = context.Links
        .Include(e => e.LinkTags)
        .ThenInclude(e => e.Tag)
        .ToList();

With many-to-many in EF Core 5, I can skip over the join table and use direct navigation properties. We can instead get a little more direct:

public class Link
{
    public int LinkId { get; set; }
    public string Description { get; set; }

    public List<Tag> Tags { get; set; } // Skips right to Tag
    public List<LinkTag> LinkTags { get; set; }
}

public class Tag
{
    public string TagId { get; set; }
    public string Description { get; set; }

    public List<Link> Links { get; set; } // Skips right to Link
    public List<LinkTag> LinkTags { get; set; }
}

Look how easy querying is now:

var linksAndTags 
    = context.Links
        .Include(e => e.Tags)
        .ToList();
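Writes get the same treatment. Here’s a quick sketch (assuming the model above, with illustrative values): associate links and tags directly, and EF Core maintains the join table rows for you.

var tag = new Tag { TagId = "efcore", Description = "Entity Framework Core" };
var link = new Link
{
    Description = "EF Core 5 many-to-many docs",
    Tags = new List<Tag> { tag } // no LinkTag instance needed
};

context.Links.Add(link);
context.SaveChanges(); // inserts the Link, the Tag, and the join row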

In the standup, Arthur Vickers mentioned an important fact that’s easy to overlook: while the release timing is similar, EF Core 5 is not bound to .NET 5. It targets .NET Standard 2.1, so it can be used in any .NET Core 3.x app (and, of course, in .NET 5). Take a look at the docs to learn more about .NET Standard compatibility.

Checking in on NuGet changes

This week, the .NET Tooling community standup brought in the NuGet team to talk about what they’re working on. They mentioned three key things: UX enhancements to nuget.org, improvements to package compatibility in Visual Studio, and better README support for package authors.

The team has been working on improving the experience on nuget.org—in the last few weeks, they’ve blogged about how you can use advanced search and also view dependent packages. While I prefer to work with packages in my IDE, you can hit up nuget.org for a broader view—though it’s historically been hard to filter through the noise. The new filtering options are a welcome improvement: I can filter by dependencies, tools, templates, relevance, downloads, and prereleases. Whether I value stability or finding out what’s new, the experience is a lot better now.

I was most interested to hear about tooling support, as that’s how a lot of us use NuGet most. The team discussed upcoming changes to floating version support. Floating version support means you’re using the latest version of a range you specify: for example, if you ask for version 2.* of a package that has shipped 1.0, 2.0, and 2.1 versions, you get 2.1. Soon, you’ll be able to specify this in the Version field in the NuGet UI—no more hacking away at the project file. As a whole, the Version field will allow more flexibility and not just be a drop-down.
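For reference, a floating version in the project file looks like this today—the package name here is just an example:

<PackageReference Include="Some.Package" Version="2.*" />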

Lastly, if you author NuGet packages, README links will no longer be a post-upload step—they’ll be built right into the Visual Studio package properties. That will allow you to easily have your package documentation on nuget.org, and the NuGet UI in Visual Studio will have a link to it.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

Jon Hilton compares Blazor and Vue, and also works with client-side Blazor.

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Luis Quintanilla (1 of 2) ]]> https://www.daveabrock.com/2020/08/29/dev-discussions-luis-quintanilla-1/ 608c3e3df4327a003ba2fe54 Fri, 28 Aug 2020 19:00:00 -0500 This is the full interview from my discussion with Luis Quintanilla in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today!

Machine learning is a fascinating world and, to many, a complicated one. As .NET developers, we definitely see the benefit in training our data but between the learning curve and using other languages like Python for machine learning—a language .NET devs might not be familiar with—ML is often sent to a developer’s “I should look into that sometime” queue.

That changed in 2018, when Microsoft launched ML.NET—a free, open-source, cross-platform machine learning framework for .NET. With ML.NET, you can use your favorite languages like C# or F# to work with your custom machine learning models. The idea is to meet you where you are and make ML more accessible.

There’s no one better to talk to about this than Luis Quintanilla. Luis has been with ML.NET since the beginning and was eventually scooped up by Microsoft to work on the docs for ML.NET. Luis had so much great stuff to share that we’ll split this interview up into two parts. Today, we’ll talk about his path to Microsoft, the value of ML.NET, and how to get started. Next week, we’ll talk about using ML.NET over something like Azure Cognitive Services, use cases for ML.NET, and more.

Luis Quintanilla

How did you get to Microsoft, and what do you do there these days?

Before joining Microsoft, I was consulting in the artificial intelligence space for companies in various industries. I was fortunate enough to work on a wide variety of projects using different technologies. Most of these technologies were not .NET technologies. At Build 2018, ML.NET was announced. I had done limited .NET Framework development in the past. Coincidentally, around the same time, .NET Core 2.1 was released so I thought it was a good time to jump in with both feet. I was immediately hooked and since I wanted a break from the technologies I used every day at work, I started using it in my spare time.

I found myself being productive with ML.NET and would document what I was experimenting with on my blog. Additionally, I gave several talks at local user groups and conferences on ML.NET. Around the same time, Microsoft was looking for someone to join the Microsoft Docs team to focus on ML.NET so I happened to naturally fit the role. I was fortunate enough to connect with the folks on the hiring team and the rest is history.

For the past year and a half, my focus has been ML.NET, but I have recently expanded my responsibilities to also create content for Azure Machine Learning and .NET for Apache Spark. Like with ML.NET, I’ve had a great experience working with these technologies, which are integral to data and machine learning workflows.

What made you focus on ML.NET over other development tech?

I could write an entire essay on why ML.NET, but all of the reasons can be summarized in a single word: .NET. Now, to expand on that, here are a few reasons why I enjoy ML.NET so much!

Languages

.NET is made up of several fantastic languages: C#, F#, and Visual Basic.

Though not unique to .NET, I like statically-typed languages. I’m sure many of the readers are able to build their applications and successfully run them without errors on the first try 😉. That, however, is usually not my experience. Therefore, I prefer catching as many errors as possible at compile time. Another reason I like types is they provide a way of documenting your code.

Sometimes, it may be the case that you step away from your code for a day or two and have to refresh your memory. Types provide some form of annotation that at the very least provide you with a hint of what data types are used in functions and their overall logic. Annotations by themselves are useful, but having the compiler check those type annotations makes me feel extra safe. This is of extreme importance when working in data science and machine learning scenarios. Although ultimately the data used by machine learning algorithms to train models is encoded as numbers, knowing your data schema and checking it at compile time may help reduce the number of errors in your code as you transform your data.
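To make that concrete, here’s a minimal sketch of what a typed data schema looks like in ML.NET—the class and column names are illustrative, but every downstream reference to them gets checked by the compiler:

using Microsoft.ML.Data;

public class SentimentData
{
    // LoadColumn maps each property to a column position in the data file.
    [LoadColumn(0)]
    public string Text { get; set; }

    [LoadColumn(1)]
    public bool Label { get; set; }
}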

With languages like Python, although you can do type checking and conversions at runtime as well as use type annotations and static type checkers, it’s not the default way of working with the language. In fact, I’ve noticed various Python libraries working to introduce type annotations. I’m sure this is no easy task since it’s all being done retroactively but in my opinion it’s a worthwhile effort in the long term.

Lately I’ve been doing more F# development and the more I use it, the more I like it. F# for me provides a nice balance between Python and C#. F# gives you the productivity and succinctness of a language like Python, while still having the compiler and many other neat features at your disposal.

Typically, data is represented as a list or collection of items. In F#, functions and collections are integral features of the language. Therefore, you can work with them in easy and powerful ways. When performing data transformations, you can leverage the fact that your data is immutable by default and, by using function composition, chain together multiple operations into a data transformation pipeline while limiting side effects.

Although F# is a functional-first language, that doesn’t mean you can’t take advantage of object-oriented concepts like interfaces and classes. Because F# is a language that runs on .NET, you’re also able to use existing .NET libraries and NuGet packages, such as ML.NET in your applications.

Runtime

The .NET runtime is fast and performant. This is important in two scenarios: training machine learning models and deploying them. A good part of training machine learning models involves performing operations on vectors and matrices. .NET provides Single Instruction Multiple Data (SIMD) enabled types via the System.Numerics namespace. ML.NET leverages these types where possible to increase the throughput of training operations, making training fast and efficient.
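As a quick illustration of those SIMD-enabled types (my sketch, assuming the array length is a multiple of the hardware vector width):

using System.Numerics;

float[] left  = { 1f, 2f, 3f, 4f, 5f, 6f, 7f, 8f };
float[] right = { 8f, 7f, 6f, 5f, 4f, 3f, 2f, 1f };
float[] sums  = new float[left.Length];

// Each iteration adds Vector<float>.Count elements at once instead of one at a time.
for (int i = 0; i < left.Length; i += Vector<float>.Count)
{
    var sum = new Vector<float>(left, i) + new Vector<float>(right, i);
    sum.CopyTo(sums, i);
}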

A common deployment target for machine learning models is the web. In .NET, this means deploying your models in an ASP.NET application. In benchmarks, such as TechEmpower, ASP.NET Core ranks in the top 10—making it a great option to run fast and scalable machine learning applications on.

Tooling

.NET has world-class tooling across the board and you can’t go wrong with any of your choices. Visual Studio is an excellent IDE packed with tons of functionality to help developers be more productive. Alternatively, another great IDE for .NET is JetBrains Rider. If you’re looking for a more lightweight development environment, you can also use Visual Studio Code. When working with F#, you can use the Ionide extension, which makes F# development a pleasant experience.

Data science and machine learning workflows are very experimental. This means that you sometimes may want to have an interactive environment where you get feedback in near real-time of the outputs generated by your code. You’d also like a way to visualize your data inline. Within the data science community, a popular interactive computing environment is Jupyter Notebooks. You can leverage this interactive environment in .NET through .NET Interactive, which provides among many things, a kernel for you to run .NET code interactively.

Deployment Targets

.NET Core was a big step towards .NET everywhere. That’s not to say you couldn’t run .NET outside of Windows prior to .NET Core, but .NET Core was able to cohesively package things together so that you didn’t have that extra cognitive load of choosing where you want your applications to run.

Now, if you have .NET everywhere, this means, at least from a deployment standpoint, you can run your machine learning models across various platforms, depending on where your users are. As mentioned previously, a common deployment scenario is hosting your machine learning model in a web API. .NET, however, gives you the ability to take that same model and also run it in desktop, mobile, IoT, and many other deployment targets.

The other neat thing about .NET everywhere is, you’re using the languages and tooling you’re used to. Whether you’re developing WPF, Xamarin or Unity applications, you’re typically using the same .NET languages and tooling. Therefore, you’re able to leverage your existing skills and be productive almost immediately.

Modernization

While the latest and greatest features are going into .NET Core, there are still many applications built in .NET Framework that are key to the operations of many organizations. Because ML.NET is built on .NET Standard, developers are able to add machine learning to their existing .NET Framework applications to enhance and modernize the capabilities of that application.

Extensible

Although .NET is great, a large portion of the data science and machine learning space is predominantly made up of libraries and frameworks built in Python. That, however, does not limit ML.NET, because it is extensible. ML.NET supports working with TensorFlow and Open Neural Network Exchange (ONNX) models. TensorFlow is a popular platform for building machine learning models. Using TensorFlow.NET, a set of C# bindings for TensorFlow, users can train and deploy TensorFlow models with ML.NET. ONNX is an open format built to represent machine learning models. This means that you can train a model in other popular tools and frameworks like Azure Custom Vision, PyTorch, or scikit-learn, then export or convert that model to ONNX and consume it using ML.NET.

Open Source & Community

ML.NET, like .NET, is open source. This allows for the community to collaborate and contribute to it. Users have various ways of contributing to ML.NET, whether it’s raising issues, updating documentation or submitting pull requests, they’re all valuable contributions that only help make the framework that much better for everyone to use.

ML.NET has a vibrant and passionate community. Being open source gives visibility into the work being done which allows community members to identify opportunities to improve the core ML.NET code, build solutions around it, or provide training materials. One example of this is MLOps.NET, an open source library for tracking the lifecycle of ML.NET machine learning models. Others include videos, blog posts, and even its very own conference (make sure to stay tuned for updates on an upcoming hackathon).

These are only a few of the many reasons why I enjoy ML.NET. While it’s still relatively new, I foresee healthy growth and a bright future as more individuals use and contribute to it.

Correct me if I’m wrong, but I believe a big mission of ML.NET is making machine learning accessible—that is, I shouldn’t have to be an expert in machine learning to do it in .NET. Even still: how much should I know before I get started?

That’s right! ML.NET provides many ways of interacting with it depending on what you’re most comfortable with. The easiest way to get started is by using the tooling. The tooling provides a low-code way of training and consuming ML.NET models. If you prefer a graphical user interface, you can try Model Builder, a Visual Studio extension that guides you through the steps involved in training a machine learning model. As long as you have a general sense of the problem you’re trying to solve (classify text, predict a number, categorize images) and you have a dataset, Model Builder takes care of the rest.

Once your model is trained, code to retrain and consume your model is automatically generated for you, making it even easier to integrate your model into your end-user applications. Some scenarios, like image classification, are resource intensive. As a result, you have the option of using a GPU locally or in Azure.

Alternatively, if you prefer working on the command line, you can use the ML.NET CLI, a .NET command-line tool for training ML.NET models and generating consumption code. The idea is very much the same as Model Builder, except now you interact with the tooling via the command line. The CLI is also a great choice for Machine Learning Operations (MLOps) scenarios where model training and deployment is done as part of a continuous integration (CI) or continuous deployment (CD) pipeline.

For folks who want more control, prefer a code-first approach, or are more familiar with machine learning concepts, there’s other ways of using ML.NET. One is with the ML.NET Automated ML (Auto ML) API. The AutoML API is leveraged by the tooling to try to find the “best” model. The best model for your problem depends on many factors such as the quantity and distribution of your data and time to train. Therefore, it helps to try different algorithms with different parameters.

Exploring various algorithms can be time-consuming and inefficient if done by brute force. By using the AutoML API, you can automate this exploratory phase to find the best model, while at the same time gaining access to various settings provided by the API. Oftentimes, it’s good to use the AutoML API as a starting point to help guide you in choosing the best algorithm for your problem. You can then choose to deploy the trained model or further refine it using the ML.NET API.

If you want full control over your machine learning pipeline, you can use the ML.NET API. The API provides you with direct access to data loaders, transformations, trainers, and prediction components that you can configure as needed to solve your problem.
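Here’s a minimal sketch of that code-first path, reusing the SentimentData schema from earlier (the file name and trainer choice are illustrative):

using Microsoft.ML;

var mlContext = new MLContext(seed: 0);

// Load the training data using the typed schema.
IDataView data = mlContext.Data.LoadFromTextFile<SentimentData>(
    "sentiment.csv", hasHeader: true, separatorChar: ',');

// Featurize the text column, then append a binary classification trainer.
var pipeline = mlContext.Transforms.Text
    .FeaturizeText("Features", nameof(SentimentData.Text))
    .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression(
        labelColumnName: nameof(SentimentData.Label),
        featureColumnName: "Features"));

ITransformer model = pipeline.Fit(data);
mlContext.Model.Save(model, data.Schema, "model.zip");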

One of the nice things is that none of these ways of using ML.NET are mutually exclusive. You can start off with the tooling to bootstrap the model training process and, from there, use the ML.NET API to make further refinements. In the end, it’s all about choice: depending on your experience with machine learning and preferred workflow, there’s an option for you.

You can connect with Luis Quintanilla at his blog or on Twitter.

]]>
<![CDATA[ Use Project Tye to simplify your .NET microservice development experience (part 2) ]]> https://www.daveabrock.com/2020/08/27/microservices-with-tye-2/ 608c3e3df4327a003ba2fe53 Wed, 26 Aug 2020 19:00:00 -0500 In the last post, I introduced Project Tye: what it is, the Blazor-powered dashboard, monitoring services, adding dependencies, and working with the optional configuration file. In my opinion, the content of that post alone made Tye worth it, but a huge use case is cutting through the complexities of containerized applications and being able to simplify deployment scenarios.

You’re probably aware of the reputation of Kubernetes of being extremely complex. My experience with it is that of many: bewildering at first, but ultimately beneficial. For many of us, we want to take advantage of Kubernetes without having to worry about being an expert, or spending hours (or days) on configuration.

In this post, we’re going to look at how Project Tye can help us deploy our containerized application to Kubernetes effortlessly. As in the last post, fire up your favorite terminal and let’s get started.

Docker is a technology that allows you to deploy and run applications in containers. Kubernetes is a system that allows you to manage containerized apps across nodes. If this isn’t making sense, hit up the Docker and Kubernetes documentation to learn more. An in-depth discussion of Kubernetes and Docker is outside the scope of this post.

Before we get started, make sure you have the .NET Core 3.1 SDK installed. You won’t get very far if you don’t.


Prerequisites before starting

Fun fact: if you want to get up and running with Tye quickly, you can execute tye run and not have to worry about containerizing your apps. In the last post, to keep things simple, that’s exactly what we did. (If we wanted, we could have executed tye build to build containers for our application.)

Now that we know enough to be dangerous, it’s time to get real and utilize containers for our app. While Project Tye will do a lot of the heavy lifting for us, we still need to get the pieces in place for Tye to do its magic.

Before deploying with Project Tye, you need the following:

  • Docker installed on your system
  • A registry to store containers—Docker uses DockerHub by default, or you could use something like Azure Container Registry (we’ll be doing ACR)
  • Some sort of Kubernetes cluster (AKS, Kubernetes in Docker, Minikube, and so on)

We will perform the last two steps now.

Create ACR and AKS instances

We’ll need some sort of container registry and Kubernetes cluster for Tye to use. I’ll be using the Azure Container Registry (ACR) and Azure Kubernetes Service (AKS), both Azure services. Here’s how to set that up. (If you’ve got a registry and cluster all ready, feel free to skip past this section.)

From the Azure Portal, search for “container” and click Container registries. Then, click +Add to create a new registry.

From the Create container registry screen, enter a subscription, resource group, unique registry name, location, and SKU. The Basic SKU should be fine for our purposes. Once complete, click Review + Create, then Create.

[Image: create an ACR instance]

Now, we’re ready to create our AKS instance. Again from the search bar at the top of the Azure Portal screen: search for “aks”, then click Kubernetes services. Fill out a subscription, resource group, and cluster name, and accept the rest of the defaults.

[Image: create AKS cluster - basics]

Don’t create the resource yet!

Next, pop on over to the Integrations tab. This is important: select the registry you just created from the drop-down list, then click Review + Create, then Create. It’ll take a few minutes to complete resource creation.

[Image: create AKS cluster - integrations]

The Kubernetes command-line tool, kubectl, needs to know about the cluster. To do so, call the Azure CLI from your local machine (you may first need to call az login or az acr login --name {registry_name}—I had to do the latter).

az aks get-credentials --resource-group {resource-group} --name {cluster-name}

Once that completes, you can execute kubectl config view to view and verify your local Kubernetes configuration. Here’s both commands in one handy screenshot.

[Image: az aks get-credentials and kubectl config view output]

Deploy our dependency ourselves

Remember our Redis dependency from the last post? We will have to deploy this ourselves. Why doesn’t Tye do this for us? This is by design. Your dependencies are your dependencies, likely already configured with ports and connection strings. The assumption is that these are already set up by you, so Tye doesn’t need to make assumptions or create a new instance for you.

Borrowing from the Tye introductory post from Microsoft, we’ll take the existing configuration from Tye’s GitHub using kubectl apply:

kubectl apply -f https://raw.githubusercontent.com/dotnet/tye/master/docs/tutorials/hello-tye/redis.yaml

Our first deploy

We’re ready to try out our first deploy!

Before we run tye deploy it’s important to note that Tye will use your existing credentials to push to Docker and access your Kubernetes clusters—so Tye will be using your existing context if you do nothing.

That is done by, you guessed it, tye deploy. This first time, you’ll need to append the --interactive flag. Using this, Tye will request a few things.

  • Container Registry - enter myregistry.azurecr.io (if you are using ACR) or your username for Docker Hub
  • Connection string for redis - enter redis:6379, assuming you used the same deploy (if not, use a specified port)

For this to work, I had to do a docker login first, but your experience may vary.

Using the --interactive flag is a one-time step for your distributed application, so that Tye is aware of your registry and can register the Kubernetes secret for our external dependency (Redis).

After execution, your terminal will see a lot of logged activity. Here’s the gist of what’s going on.

  • Publishes your projects
  • Builds Docker images for each project and pushes them to the registry
  • Has your cluster pull those images from the registry
  • Creates manifests and service definitions
  • Generates a Kubernetes Deployment and Service for each project and applies them to your current context

You can confirm everything by executing kubectl get pods from your terminal. Here’s what I see:

NAME                          READY   STATUS    RESTARTS   AGE
marvel-api-6d479df46d-hlmrp   1/1     Running   0          9m
marvel-web-744dbb6bf8-98d4g   1/1     Running   0          9m
redis-58897bf8c-p72tz         1/1     Running   0          13m

Microsoft has noted that because Tye does not automatically enable TLS in the cluster, traffic occurs over HTTP. The team might look to enable TLS in the future.

By the way, you can include your registry in your tye.yaml file to prevent the --interactive step, if needed. It’s as simple as including this in your file:

registry: {my-registry-name}

This customization is ideal for CI/CD scenarios.

Port-forward to access our application

We are deployed! So, how do we access our app? We’ll want to access the web app from outside of our Kubernetes cluster. To do so, we’ll use port-forwarding from the Kubernetes CLI:

kubectl port-forward svc/marvel-web 5000:80

We’re in business! Project Tye deployed our app to AKS for us. If you browse to http://localhost:5000, our trusty app should be up and running! Feel free to check out Application Insights for your cluster to see it in action.

In a world where “simple” and “Kubernetes” hardly ever share the same sentence, Project Tye was able to do it with just a tye.yaml file. Tye set up all the environment variables for us, so all our services could communicate with each other, without any intervention on our part.

Clean up

If you’d like to clean up after trying this out, here’s what to do:

  • Remove Tye deployment - run tye undeploy (run tye undeploy --what-if for a preview)
  • Delete Redis deployment - run kubectl delete deployment/redis svc/redis
  • Delete AKS cluster - from the Azure CLI, run az aks delete --name {my-cluster} --resource-group {my-resource-group}

Wrapping up

In this post, we created Azure Container Registry (ACR) and Azure Kubernetes Services (AKS) instances, deployed an external dependency, and deployed our app to Kubernetes from Project Tye. Then, we used port-forwarding to provide the ability to run our app locally outside of our cluster.

I hope you enjoyed this introductory two-part series on Project Tye. I realize it was simple with just two applications and a dependency—this was intentional. As Tye evolves, I’d like to dig a little deeper on a complex real-world app and put debugging through its paces (which is still being worked on). It’s early but hopefully you can already see that this powerful tool takes a lot of headaches out of developing microservices in .NET, which is all you can ask.


]]>
<![CDATA[ The .NET Stacks #13: .NET 5 Preview 8 and Blazor, interview with Scott Addie, community links! ]]> https://www.daveabrock.com/2020/08/22/dotnet-stacks-13/ 608c3e3df4327a003ba2fe52 Fri, 21 Aug 2020 19:00:00 -0500 Happy Monday to you! Here’s what we have for this week:

  • How Blazor is drastically improving in preview 8 of .NET 5
  • A discussion with Scott Addie
  • A trip around the community

.NET 5 Preview 8 ♥’s Blazor

In a few weeks, we’ll see preview 8 of .NET 5, the last preview before the release candidates, and it’ll be filled with a ton of Blazor improvements for performance and tooling. In this week’s ASP.NET community standup, the team showed off a lot of these new features that folks have been asking for.

The biggest takeaway is that, with preview 8, Blazor is using the .NET 5 runtime and libraries—and with it, its improvements. Previously, Blazor WebAssembly was using Mono libraries—which seemed a little off, when thinking about the “One .NET” mission of .NET 5. Using the .NET 5 runtime now accounts for a ton of performance gains: the team reports that arbitrary CPU code in .NET 5 is around 30% faster (think dictionaries and string comparisons), JSON serialization and deserialization looks to be about twice as fast, and Blazor component rendering is anywhere from 2x to 4x faster.

Microsoft has traditional C# devs sold on Blazor—one toolchain, one language, much awesomeness. We get it. To really make an impact, and welcome other folks to .NET, Blazor will need to have features that folks demand from their other SPA frameworks and libraries like Angular and React (or other tools in that space). For example, if I ask a JS expert (and .NET outsider) to try a new SPA competitor, and there’s no auto-refresh, it’s hard to be taken seriously.

With preview 8, we’re seeing quite a few of these features. Here’s three of them.

CSS isolation

When working with components, the idea is that they can be portable, organized elements. If I have a <DatePicker> component, for example, I should be able to drop it and its styles wherever I need. Until preview 8, I would have to style my components in some CSS file somewhere else, and bring in that style sheet, and worry about maintaining that and dealing with cascading impacts throughout my site.

No more. Now, with CSS isolation, you can make life easier on yourself. By using a naming convention like myComponent.razor.css, Blazor will bind the CSS to your component with no imports needed! And since those styles are bound only to your component, you can simplify them: instead of declaring something like <table class="myComponentStyle">, just style table in your component’s stylesheet and you’re done!
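
Here’s a quick sketch of the convention (a made-up DatePicker component; the styles are only for illustration):

<!-- DatePicker.razor -->
<table>
    <tr><td>August 24, 2020</td></tr>
</table>

/* DatePicker.razor.css: these rules apply only to DatePicker's markup */
table {
    border: 1px solid silver;
}

No class names to invent, no site-wide stylesheet to touch.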

What about JS isolation? It isn’t ready now, but it’s on the way.

Lazy loading

Let’s say you have parts of your app that aren’t accessed every time. For example, reporting that’s generally accessed once a month. With lazy loading, you can load any assemblies only when needed. In your routing configuration, you can inject a lazy loading service and specify which assemblies to load and when. No more customized tree shaking.

Auto-refresh

The instant feedback loop is a big reason why so many people are drawn to front-end development. I can make a change, click Save (or not even Save!) and my changes are reflected immediately. This is thanks to incremental build tooling running in the background.

.NET has its own incremental file watcher, dotnet watch, that not a lot of people know about. Blazor will use dotnet watch behind the scenes to provide the auto-refresh capability.
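
You don’t have to wait for Blazor to try the underlying tool, either. From any ASP.NET Core project directory, this rebuilds and restarts the app on every file save:

dotnet watch run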

There’s much more coming in preview 8, so definitely check out the video if you’re so inclined—and watch this space for more details when it arrives.

Dev Discussions: Scott Addie

Have you worked in ASP.NET Core? If so, you have surely come across Scott Addie, whether you know it or not. For over three years, Scott has worked as a content developer at Microsoft—publishing documentation on the framework and its APIs, ebooks, blog posts, Learn modules, and more.

He’s also been active in the developer community—both before and after joining Microsoft—and you can find him as a co-host on The .NET Docs Show, a weekly Twitch stream.

I recently caught up with Scott to talk about his work in the community, his career, .NET 5, and what he’s working on.


Walk us through your role.

For over three years, I’ve been a content developer on the .NET, Web Development, & Languages team of the Developer Relations division. My primary responsibility is to produce technical content focused on teaching ASP.NET Core. I’m provided creative freedom in how to best accomplish that goal. Ultimately, the ability to effectively communicate complex ideas, in both verbal and written forms, is critical to my success in this role.

What kind of projects are you hacking away at now?

Bill Wagner, Maxime Rouiller, and myself have been collaborating on a tool that generates “What’s New” pages for conceptual docsets hosted on docs.microsoft.com. It’s a .NET Core console app written in C# that uses the GitHub GraphQL API to process pull requests. I plan to distribute the tool internally as a .NET Core global tool. The idea is to provide customers with a summary of what changed in a particular product’s documentation during a specific time period. For example, the ASP.NET Core documentation publishes “What’s New” pages on a monthly cadence. At the time of writing, the concept has been adopted for the following products:

I’m actively working with a handful of other teams, including C++, Visual Studio, and SQL, to roll out these pages. There’s a sizeable features backlog for this tool. Some feature ideas include a “What’s New” hub page and an RSS feed.

My approach has always been: hit up the docs for just-in-time knowledge, or quick learning, and Learn modules for in-depth and hands-on content. Is that Microsoft’s approach?

The team offers content for folks at every level in their journey. If you’re a seasoned ASP.NET Core developer, the conceptual content and API reference content are great resources. If you’re new to .NET, the Learn .NET page is a better starting point. If you want to learn more about using ASP.NET Core with Azure, Microsoft Learn is a good place to start. In many Learn modules, an Azure subscription isn’t required to test drive Azure services with .NET. You’re provided an Azure Cloud Shell instance and a set of instructions that can often be completed in under an hour. Gamification is an important aspect of Learn. As you complete modules, you earn achievement badges. Virtual trophies are awarded for the completion of learning paths.

Over at Build, a few folks discussed Learn TV. What is it and what can we expect from it?

Learn TV is a Microsoft-owned video content platform that offers 24 hours of programming, 7 days a week. If you tuned in to the recent .NET Conf: Focus on Microservices event, you may have noticed that it streamed live on Learn TV. Additionally, Learn TV streams live shows from the Developer Relations organization, such as The .NET Docs Show and 92 & Pike. It’s still early days for Learn TV, hence the “preview” label on its landing page. Golnaz Alibeigi is working with a small team of folks to evolve it into a world-class product. I’ll defer to Golnaz for specific plans.

I know you work on a lot of Learn modules. What are you working on now?

Nish Anil, Cam Soper, and myself are authoring a .NET microservices learning path in Microsoft Learn. The learning path is based on the eBook .NET Microservices: Architecture for Containerized .NET apps and is expected to consist of seven modules. At the time of writing, the following modules have been published:

Next up is a microservices module focused on DevOps with GitHub Actions. The learning path project board is available for anyone to see.

What is your one piece of programming advice?

My advice is to ask probing questions and challenge the status quo. Regardless of your organizational rank, ask why your team operates the way they do. Ask why certain technology choices were made. The answers might surprise you. Equipped with those answers, you can begin to understand existing processes and formulate improvements. If you have an idea, don’t be afraid to speak up. Everyone brings a unique perspective to the table.

This is just a small portion of my interview with Scott! Head over to my site for more.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Use Project Tye to simplify your .NET microservice development experience (part 1) ]]> https://www.daveabrock.com/2020/08/19/microservices-with-tye-1/ 608c3e3df4327a003ba2fe51 Tue, 18 Aug 2020 19:00:00 -0500 You don’t have to be operating at Uber scale to understand the complexities of microservices and distributed systems. In many ways, even trying to develop a web app and an API or two can give you headaches when trying to build out a decent local development experience.

In the .NET world, you’re defining ports for multiple projects, hardcoding URLs, and generally walking on eggshells trying to get your different services to communicate with one another. If even the slightest configuration detail changes, you feel the pain once again. Of course, you could Docker-ize your environment, but that comes with a steep learning curve. This doesn’t even factor in the complexities of deploying a distributed system to something like Kubernetes.

Project Tye, a new-ish (since May) experimental developer tool, wants to lend a hand. Tye wants to both make development of microservices easier and help to automate the deployment of .NET applications. It’s big on meeting you where you are—offering a convention-based approach to service discovery and dependencies, and making containerization of .NET applications automatic, all with a single configuration file.

We’ll be walking through Project Tye in two separate posts. This post will focus on how Tye can help your local development experience. In the next post, we’ll work on deploying to Kubernetes.

Before we get started, make sure you have the .NET Core 3.1 SDK installed. You won’t get very far if you don’t.

Now, fire up your terminal and let’s get started!

Install Tye

First, we must install Tye. Tye is a global .NET tool.

Execute this from your terminal:

dotnet tool install -g Microsoft.Tye --version "0.4.0-alpha.20371.1"

Because Tye isn’t stable yet, you need to append --version to it with your desired version. Depending when you read this, the version above might not be the latest. If you leave off --version you’ll get a listing of the available current versions.

Set up our projects

Now, we’ll create two projects under a single solution: a web project and an API project. Create a folder for this (or just do a mkdir my-project from the terminal).

We’re going to do a quick app that loads some Marvel characters for us. Execute the following from your newly created folder to create our web app, a Razor pages solution:

dotnet new razor -n marvel-web

And now, without moving folders, create our API project:

dotnet new webapi -n marvel-api

Let’s add both projects to our solution:

dotnet new sln
dotnet sln add marvel-web marvel-api

And now, you can run Tye in the same folder as your new solution file:

tye run

There’s a lot going on here. Tye is building your projects, launching your services, and creating a dashboard for you at http://127.0.0.1:8000. What’s important here: if you do nothing, ASP.NET Core will assign your app’s listening ports randomly, freeing you from the pain of port conflicts.

[Screenshot: the Tye dashboard spun up by tye run]

The main page of the dashboard shows you all your services, your bindings (discoverable URLs), and more.

[Screenshot: the dashboard's services and bindings]

If you click the name of a service (like marvel-web), you can access real-time metrics as it’s running:

[Screenshot: real-time metrics for the marvel-web service]

And, of course, what would a dashboard be without logs?

[Screenshot: service logs in the Tye dashboard]

Of course, this is cool but not very exciting. Let’s add some code to our apps.

Build out a simple API

Now, open up the marvel-api project with your favorite editor (like Visual Studio or Visual Studio Code). At the root, rename the WeatherForecast.cs file to Character.cs and replace the contents of the file with this:

namespace marvel_api
{
    public class Character
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Status { get; set; }
    }
}

Then, rename the controller to CharactersController.cs. In it, we’ll set up 10 Marvel characters. When we do a GET, we’ll retrieve five random ones. (In a little bit, we’ll be able to use this to show off our Redis caching integration.)

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace marvel_api.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class CharactersController : ControllerBase
    {
        private static readonly Character[] Characters = new[]
        {
            new Character() { 
                FirstName = "Anthony",
                LastName = "Stark",
                Status = "Deceased"
            },
            new Character()
            {
                FirstName = "Steven",
                LastName = "Rogers",
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Peter",
                LastName = "Quill",
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Thor",
                LastName = "Odinson",
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Natalia",
                LastName = "Romanoff",
                Status = "Deceased"
            },
            new Character()
            {
                FirstName = "T'Challa",
                LastName = null,
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Bruce",
                LastName = "Banner",
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Scott",
                LastName = "Lang",
                Status = "Alive"
            },
            new Character()
            {
                FirstName = "Phillip",
                LastName = "Coulson",
                Status = "Deceased"
            },
            new Character()
            {
                FirstName = "Nick",
                LastName = "Fury",
                Status = "Alive"
            }
        };

        private readonly ILogger<CharactersController> _logger;

        public CharactersController(ILogger<CharactersController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<Character> Get()
        {
            var rand = new Random();
            return Characters.OrderBy(x => rand.Next()).Take(5);
        }
    }
}

That should be all we need for the API for the time being. Now, let’s open the marvel-web project.

Build out the web project

In this project, also add a Character class (or you could always create a shared project, as it’s identical between the API and web projects):

namespace marvel_web
{
    public class Character
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Status { get; set; }
    }
}

Then, create a CharacterClient.cs class that calls our API.

using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

namespace marvel_web
{
    public class CharacterClient
    {
        private readonly HttpClient _client;

        public CharacterClient(HttpClient client)
        {
            _client = client;
        }

        public async Task<List<Character>> GetCharactersAsync()
        {
            var response = await _client.GetAsync("/characters");
            var stream = await response.Content.ReadAsStreamAsync();

            // The API serializes JSON with camelCase names by default, but
            // System.Text.Json deserialization is case-sensitive out of the
            // box, so opt in to case-insensitive property matching.
            var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
            return await JsonSerializer.DeserializeAsync<List<Character>>(stream, options);
        }
    }
}

In the Index page model, at Index.cshtml.cs, add a Characters property and change OnGet to call the client we created:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;

namespace marvel_web.Pages
{
    public class IndexModel : PageModel
    {
        private readonly ILogger<IndexModel> _logger;

        public IndexModel(ILogger<IndexModel> logger)
        {
            _logger = logger;
        }

        public List<Character> Characters { get; set; }

        public async Task OnGet([FromServices] CharacterClient client)
        {
            Characters = await client.GetCharactersAsync();
        }
    }
}

Next, open Index.cshtml to lay out our table:

@page
@model IndexModel

<div class="text-center">
    <h1 class="display-4">Random Marvel Characters</h1>
</div>

<table class="table">
    <thead>
        <tr>
            <th>First Name</th>
            <th>Last Name</th>
            <th>Status</th>
        </tr>
    </thead>
    <tbody>
        @foreach (var character in Model.Characters)
        {
            <tr>
                <td>@character.FirstName</td>
                <td>@character.LastName</td>
                <td>@character.Status</td>
            </tr>
        }
    </tbody>
</table>

You might be thinking: OK, how does my web app communicate with my API?

That magic is done through the Microsoft.Tye.Extensions.Configuration NuGet package, which you can install from either the dotnet CLI or the NuGet Package Manager in Visual Studio. You may already be familiar with the Microsoft.Extensions.Configuration system that ships with any ASP.NET Core project. The Tye package provides Tye-specific extension methods on top of Microsoft.Extensions.Configuration.
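
For example, from the root of the solution (grab whichever version matches your Tye install):

dotnet add marvel-web package Microsoft.Tye.Extensions.Configuration --version "0.4.0-alpha.20371.1"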

Open up Startup.cs and change ConfigureServices to this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    services.AddHttpClient<CharacterClient>(client =>
    {
        client.BaseAddress = Configuration.GetServiceUri("marvel-api");
    });
}

This is where the service discovery takes place. (Feel free to geek out on how Tye does service discovery.) What a world—no port numbers, no brittle configuration. It just works. Execute tye run now—and go to the web project. (Also, if you click over to the logs, it’s a lot more insightful now.)

[Screenshot: the web app listing random Marvel characters]

Adding a dependency

At this point, things are great—we’ve been looking at code we write and manage ourselves. In the real world, that is rarely the case. You use libraries and dependencies you don’t manage—as a matter of fact, the whole point is to have that complexity managed for you. So how can Tye discover these dependencies?

Using a configuration file

Now that things are getting a little more involved, it’d be a good time to ask you to run tye init. When you do this, Tye drops an optional configuration file (tye.yaml) that allows you to customize your settings. When I did this, Tye populated the file with information about my current setup. (Tye provides documentation on the schema, as well.)

# tye application configuration file
# read all about it at https://github.com/dotnet/tye
#
# when you've given us a try, we'd love to know what you think:
#    https://aka.ms/AA7q20u
#
name: marvel-tye
services:
- name: marvel-web
  project: marvel-web/marvel-web.csproj
- name: marvel-api
  project: marvel-api/marvel-api.csproj

Working with Redis

To see how Tye works with external dependencies, we’ll add a Redis cache (thanks to the announcement for the inspiration). First, we’ll refactor our API’s Get method to use the IDistributedCache interface.

//using Microsoft.Extensions.Caching.Distributed;
//using System.Text.Json;

[HttpGet]
public async Task<string> Get([FromServices]IDistributedCache cache)
{
    var characters = await cache.GetStringAsync("characters");

    if (characters == null)
    {
        var rand = new Random();
        var randomCharacters = Characters.OrderBy(x => rand.Next()).Take(5);

        characters = JsonSerializer.Serialize(randomCharacters);

        await cache.SetStringAsync("characters", characters, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(3)
        });
    }

    return characters;
}

For our API to use IDistributedCache, we need to register it in our API’s Startup class. After you fetch the Microsoft.Extensions.Caching.StackExchangeRedis NuGet package, do this in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddStackExchangeRedisCache(options =>
    {
      options.Configuration = Configuration.GetConnectionString("redis");
    });
}

Again, admire the beauty of no port hardcoding. Instead, the Tye host supplies the connection string for our redis service through configuration. Next, let’s update our new configuration file to use redis. As in the announcement, we can add a redis-cli service to monitor redis traffic.

name: marvel-tye
services:
- name: marvel-web
  project: marvel-web/marvel-web.csproj
- name: marvel-api
  project: marvel-api/marvel-api.csproj
- name: redis
  image: redis
  bindings:
  - port: 6379
    connectionString: "${host}:${port}"
- name: redis-cli
  image: redis
  args: "redis-cli -h redis MONITOR"

Now, if you do a tye run, pull up the dashboard and note the addition of our redis components. If you go to the web project, we should see new randomized characters every three seconds.

Learn more

There are so many possibilities with Tye—head on over to the GitHub repository to learn more. There are tons of samples there to get you started, including apps with Angular, MongoDB, nginx, Azure Functions, and much more. And if it’s missing something, let the team know using a GitHub issue or the feedback button in the Tye dashboard.

Wrapping up

In this post, we introduced Project Tye and talked about how it can improve your experience when building .NET distributed applications. We installed Tye, toured the dashboard, and built out a simple web and API project. We showed how easy service discovery is with Tye, and also added a dependency.

In the next post, we’ll talk about how Tye can assist with deploying our app to Kubernetes. Stay tuned!

]]>
<![CDATA[ The .NET Stacks #12: Azure DevOps or GitHub Actions, .NET Foundation results, community links! ]]> https://www.daveabrock.com/2020/08/15/dotnet-stacks-12/ 608c3e3df4327a003ba2fe50 Fri, 14 Aug 2020 19:00:00 -0500 Happy Monday, everyone! Current status: still wondering if I’m just a JSON pusher (warning: link has a naughty word).

This week, we:

  • Discuss how Azure DevOps and GitHub Actions will be ironed out
  • Discuss the .NET Foundation election results
  • Check out a busy week in the community

GitHub Actions or Azure DevOps?

Hopefully by now, in 2020, you know that friends don’t let friends right-click publish. There are so many tools you can use to work with continuous integration and continuous deployment pipelines—I’d like to talk about two under Microsoft’s ownership: Azure DevOps and GitHub Actions.

As a .NET developer, you’re probably familiar with Azure DevOps (previously Team Foundation Server). It’s got a decade of battle-tested use in enterprises, with well-featured CI/CD pipelines. If you want best-in-class Microsoft tooling with either your on-prem or cloud deployments, it’s always been a clear choice.

The lines blurred a little when, in 2018, Microsoft acquired GitHub. GitHub, traditionally known as the community’s preferred way to share code, was now owned by a company invested in open source development and interested in bringing Microsoft-ness to new audiences. GitHub’s own CI/CD solution, GitHub Actions, is getting a lot of well-deserved recognition. And with npm joining GitHub, GitHub will definitely help improve experiences with dependency management. To further blur the lines, a lot of folks don’t know about the GitHub Enterprise offering, an on-prem solution that even has integration capabilities with Active Directory!

So what does this say for the future of Microsoft CI/CD pipelines? As a long-term strategy, Microsoft can’t expect to run two similar products with internal competition. I actually wondered about this not so long ago and was referred to an insightful conversation that occurred on the Azure Podcast with GitHub PM Sasha Rosenbaum.

She agreed that, yes, the investments will be geared toward GitHub and its vibrant community, and long-term GitHub will probably win out. This isn’t to say that you need to migrate in 6 months or anything—she even mentioned a 5-year timeframe—but the future is definitely with GitHub Actions. As such, her recommendation was that there’s nothing wrong with using Azure DevOps today, but to look into GitHub if you’re starting a new project.

Until the time comes when Microsoft has one CI/CD tool, you’ll likely see GitHub Actions build out a stronger feature set. It’s already robust—its CI capabilities are right there with Azure DevOps—but it has some work to do until its continuous deployment functionality is on par (release pipelines, gates, advanced permissions, and whatnot).

.NET Foundation Board of Directors election results are in

This week, we found out who would serve on the .NET Foundation Board of Directors. Out of 17 nominees, these six folks came out on top (with Beth Massi in the 7th spot as Microsoft’s one appointed board member):

It’s great to see an entirely new Board and am looking forward to seeing what comes from these fresh perspectives.

🌎 Last week in the .NET world

The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

Over at the .NET Rocks podcast, they discuss adding identity to your applications.

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Scott Addie ]]> https://www.daveabrock.com/2020/08/15/dev-discussions-scott-addie/ 608c3e3df4327a003ba2fe4f Fri, 14 Aug 2020 19:00:00 -0500 This is the full interview from my discussion with Scott Addie in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away!

Have you worked in ASP.NET Core? If so, you have surely come across Scott Addie, whether you know it or not. For over three years, Scott has worked as a content developer at Microsoft—publishing documentation on the framework and its APIs, ebooks, blog posts, Learn modules, and more.

He’s also been active in the developer community—both before and after joining Microsoft—and you can find him as a co-host on The .NET Docs Show, a weekly Twitch stream.

I recently caught up with Scott to talk about his work in the community, his career, .NET 5, and what he’s working on.


Walk us through your role and how you got there in your career.

For over three years, I’ve been a content developer on the .NET, Web Development, & Languages team of the Developer Relations division. My primary responsibility is to produce technical content focused on teaching ASP.NET Core. I’m provided creative freedom in how to best accomplish that goal. Ultimately, the ability to effectively communicate complex ideas, in both verbal and written forms, is critical to my success in this role.

Content I’ve produced includes:

  • Conceptual documentation on framework features
  • API reference documentation
  • eBooks
  • Blog posts
  • Interactive, browser-based courses
  • Breakout sessions & workshops for conferences, code camps, & user groups
  • Conference keynotes
  • Webinars
  • Live streams
  • Podcasts

You’ll find much of this content on Microsoft and third-party properties such as docs.microsoft.com, Channel 9, Learn, Learn TV, dev.to, Twitch, and YouTube.

The journey to the mothership

I’m glad you asked this question. I promised myself I’d write about the journey at some point. Now I have an excuse. To understand how I got here, let’s recount the transformative events. Many people helped me get there, so I want to give credit where it’s due.

Rewind to the year 2013. I was building line-of-business apps in the financial services industry and was searching for ways to differentiate myself. .NET was a battle-tested, yet tired technology, but it allowed me to make a decent living. .NET was being used to build almost all new apps at the company. My employer was willing to cover the cost of a .NET certification, so I pursued it. Nine months of 2013 were spent tirelessly studying for and taking exams to earn my Microsoft Certified Solutions Developer (MCSD) certification.

One day while having lunch with some colleagues in the cafeteria, the witty banter turned to the topic of developer conferences. Conferences were foreign to me, but all the talk about Microsoft’s TechEd event excited me. The prospects of networking opportunities with like-minded attendees and access to engineers building .NET and Visual Studio were too good to pass up. This opportunity would also help me better prepare for the MCSD exams. I returned to my desk and told my manager I wanted to discuss TechEd at our upcoming 1-on-1. After several hours of persuasion, my manager approved the projected expenses and agreed to send me to TechEd 2013 in New Orleans. Shout out to Ruth Hands, my manager at the time. Ruth continually went to bat for me. She was supportive of my ideas and was an accelerant for my career.

On the flight to New Orleans later that year, I studied the session schedule and noticed countless “developer celebrities”. I built a schedule of sessions to attend that week. While at the conference, I attended breakout sessions from John Papa, Daniel Roth, Julie Lerman, Rowan Miller, and many others. I also attended an “Ask the Experts” session facilitated by Julie Lerman on the topic of Entity Framework. The conference’s content was amazing! After a week of breakout sessions, I felt more prepared than ever for my next MCSD exam. I was mesmerized by the presenters’ skill in delivering technical content to such a large and diverse audience. Conference speaking is what I wanted to do one day.

[Photo: TechEd 2013 badge]

Skip ahead to February 2014. I drove to the local testing center and completed the last of three exams for the MCSD certification. I passed! Equipped with a plethora of fresh .NET knowledge from the certification, I was inspired to share what I learned. With the birth of my daughter the following month, I became a father! It was a relief to earn my certification just weeks before her birth. The next couple of months would be challenging as I navigated parenting and struggled through sleep deprivation. After spending a few days with my daughter, there was a strong desire to kick my career into high gear to support her. I began to ponder what other changes I could make to differentiate myself.

It was early January 2015. A friend at work, named Matt Frye, stopped by my desk unannounced (as he always did). Matt mentioned a local .NET user group called MADdotNET. Matt attended some meetings in the past and encouraged me to attend. The January meeting’s topic was KnockoutJS. John Papa had inspired me to use KnockoutJS in his TechEd 2013 session. January marked my first user group meeting. I was hooked. It was after that meeting that I volunteered to give a talk at MADdotNET. April 1 was the first available date. I figured the attendance would be low for two reasons: I was new to the speaking circuit and was speaking on April Fools’ Day.

34 people were in attendance for my talk about using JavaScript task runners with ASP.NET MVC. To say I was nervous would be an understatement. I felt sick to my stomach, and I had absolutely no idea what I was doing. The feedback at the end of the meeting was encouraging. Several folks in the audience came up afterwards and personally thanked me for teaching them something new. The organizer, Lance Larsen, also provided much needed positive reinforcement and invited me back to speak again. To keep the momentum alive, I volunteered to deliver the same talk at another local user group. This group was a bit smaller, focused on mobile development, and was run by Matt Soucoup and Matt Snyder. Through Matt Soucoup, I met other passionate .NET community members, including Mitch Muenster, who later became a Xamarin MVP.

I caught the conference bug in 2013 and the speaking bug in 2015. If you’ve caught either one, then you already understand that they’re highly addictive substances. For the next couple years, I actively participated in both. A highlight from the early speaking years included meeting folks like Ashley Grant, Cecil Phillip, Jeremy Likness, and Rachel Appel at South Florida Code Camp. Another highlight was getting interviewed by Seth Juarez on the main stage at That Conference 2016:

[Video: That Conference 2016 interview with Seth]

While attending That Conference 2016, I stopped by an open spaces session about authoring for Pluralsight. Course authoring was another avenue I wanted to explore. I met David Berry, an accomplished Pluralsight author from the area, and Adam Mumma, a Pluralsight acquisitions editor. As an aside, David now works on another content team at Microsoft. After chatting with both David and Adam, I decided to audition for Pluralsight. After spending two weeks creating my demo, I decided this opportunity wasn’t for me. Talking into a microphone with no audience didn’t give me the adrenaline rush that’s inherent to conference speaking. I called it quits after giving it a fair shot.

My speaking progression took me from user groups to code camps to regional conferences. Through attending conferences and writing talks, I discovered some indispensable tools: Twitter and GitHub. At the time, Twitter was solely a tool for me to follow industry luminaries. In fact, I created a Twitter account in 2013 to stay in touch with folks I’d met and seen speak at TechEd 2013. GitHub became the tool I used for uploading slide decks for my talks and sharing with attendees.

In the beta releases of ASP.NET 5 (the precursor to ASP.NET Core), I invested some time reading the Microsoft docs to better understand ASP.NET 5. I wanted to build conference talks on the topic and introduce it to my team at work once the product had reached general availability. It was during this period of learning that I delivered a talk at the inaugural MKE DOT NET conference. MKE DOT NET was organized by a small team of people at Centare, including Steve Hicks. Steve was a kickball teammate, mentor in my first job out of college, and introduced me to David Pine. As yet another aside, this David is also now a member of my team at Microsoft. You could say I’m partially responsible for the DavidOverflowException at Microsoft.

As my knowledge of the ASP.NET 5 framework grew, I discovered gaps in the documentation and project templates. I wanted to address them to validate my understanding. A small documentation contribution, to what was then called ASP.NET 5, was my debut in open-source contributions on GitHub. I started a blog to share any information that didn’t belong in the official documentation. Over the course of several months, I became a top contributor to the ASP.NET 5 documentation repository on GitHub. The .NET Foundation expressed their gratitude for my docs contributions with a care package, hand-written note, and challenge coin:

[Photo: .NET Foundation thank-you note]

My blogging and involvement on GitHub led me to collaborating with brilliant folks all over the world. Mads Kristensen read a blog post I wrote about Webpack and contacted me about collaborating on a Visual Studio extension called Webpack Task Runner. We spent some time working on that extension and a few others, the most popular of which was npm Task Runner. While building a talk on ASP.NET 5 with Visual Studio Code, I discovered the Yeoman generator for ASP.NET 5 templates. My curiosity of this tool led me to collaborating on GitHub with Sayed Hashimi, Shayne Boyer, and Piotr Blazejewicz. At the time, I was just learning Git. I began submitting pull requests and made myself look like a fool a couple of times. They provided coaching to get me up-to-speed. Over the course of a few weeks working with them, I learned a ton and started to feel more confident with Git.

There was another time when I was working with Mads Kristensen on the JSON schema store project. While hacking on a JSON schema, I discovered a bug in Visual Studio’s JSON editor. I reported it to Mads, and he introduced me to Mike Lorbetske. Mike was an engineer on the dotnet new templating experience and worked with Mads on the JSON editor in Visual Studio. Mike arranged a call to discuss the defect in greater detail. At some point in the call, we stopped “talking shop”, and he asked “Have you ever considered coming to work for Microsoft?”. I hadn’t considered myself Microsoft material at that point, so I didn’t have a great answer. Mike’s question got me thinking. It was time to explore that idea further and to make more intentional career decisions moving forward.

Through my developer community contributions, I was nominated for the Microsoft MVP program. And 2016 marked my first year as an MVP! Others in the program told me that the biggest perk of the MVP program is the annual MVP Summit on the Redmond campus. Based on this feedback, I made it a point to attend. At the 2016 MVP Summit, I met up with Mads Kristensen. We debated the best approach for adding Yarn support to the npm Task Runner extension. I would later implement that Yarn support on the flight home from the summit. I ran into Mike Lorbetske in between sessions, who introduced me to his teammate Sean Peters. By the way, they’re both Wisconsinites and know Steve Hicks. What a small world!

I received an email from Martin Woodward asking if I planned to be in a certain room that afternoon at that summit. No context was provided. I simply replied, “yes”. When that time came, I took a seat in the session. Thousands of other MVPs were packed into the room, and impostor syndrome hit me hard. I was surrounded by industry veterans I had looked up to for years. The session kicked off, and Scott Hanselman took the stage. The following slide was displayed:

[Slide: MVP Summit 2016]

I was blown away when Scott recognized me as one of the five most significant contributors. I didn’t feel as though my contributions were that significant, but it felt amazing to get some recognition. That recognition made me push myself even harder, which led to my renewal in the MVP program for another year. More community contributions were made, and more networking with Microsoft employees took place.

Between 2013 and 2017, I shifted from a silent lurker to an active participant on Twitter. I also became more involved in developer community groups on LinkedIn. And in April 2017, I received two messages that would forever change my career. A Twitter and a LinkedIn message came from Rick Anderson and Scott Cate, respectively, about jobs at Microsoft. After a few phone calls, I scheduled a full day of interviews on the Redmond campus. I wasn’t totally sure what I was getting myself into. I was, however, certain that I enjoyed developer relations work. Continuing to fund my conference travel out of my own pocket and spending nights and weekends writing talks was unsustainable. Why not pursue a job that allows me to do this stuff?

On the campus, I met with folks from all over the Developer Relations division (then called APEX). My last interview was to take place with Jeff Sandquist in his office. I arrived at Jeff’s office, where I found him chatting with Seth Juarez. Seth recognized me from the That Conference 2016 interview and gave me a high five. It felt good to see another familiar face.

Skip ahead a week or so to Build 2017, and I received a call from the Microsoft recruiter with an offer to join my current team. I accepted the offer after confirming that the talent acquisition team had the right Scott.

What do you miss most about no conferences? Can any of what you miss be replaced remotely?

The “hallway track” is what I miss the most. Many conferences have time slots during which there are no topics of interest. The hallway track is the undocumented, unconference meeting to fill that void. Friendships are forged, new ideas are exchanged, career opportunities arise, and problems are solved. Some of my most actionable customer feedback was harvested in this track.

Unfortunately, the hallway track can’t be replaced in a virtual setting. There’s something about attendees being in the same physical location and practicing the Pac-Man Rule that makes this exciting. This pandemic era brings countless distractions at home that can degrade the quality of the conversations that occur in a virtual format.

David Pine, Cam Soper, and myself launched a Twitch stream in February of this year to fill some of the void. In each episode, there’s a “Hallway Track” segment in which we have an informal discussion with a guest. Live viewers can come and go and actively participate in the Twitch chat.

What kind of projects are you hacking away at now?

Bill Wagner, Maxime Rouiller, and myself have been collaborating on a tool that generates “What’s New” pages for conceptual docsets hosted on docs.microsoft.com. It’s a .NET Core console app written in C# that uses the GitHub GraphQL API to process pull requests. I plan to distribute the tool internally as a .NET Core global tool. The idea is to provide customers with a summary of what changed in a particular product’s documentation during a specific time period. For example, the ASP.NET Core documentation publishes “What’s New” pages on a monthly cadence. At the time of writing, the concept has been adopted for the following products:

I’m actively working with a handful of other teams, including C++, Visual Studio, and SQL, to roll out these pages. There’s a sizeable features backlog for this tool. Some feature ideas include a “What’s New” hub page and an RSS feed.

Visual Studio Magazine recently wrote about the .NET 5 content reorganization initiative. In a nutshell, .NET 5’s focus is consolidation. Given that theme, our documentation must adapt to the modern .NET vision. I own the .NET web development area and am collaborating with others on cross-cutting .NET fundamentals content, such as the Microsoft.Extensions family of APIs for configuration, dependency injection, and so on. In some areas, you’ll begin to see less emphasis on product names, particularly for content targeted at new customers. If a customer wants to build a web application with .NET, they shouldn’t have to know that ASP.NET is the product name. I’m tasked with solving this problem for the ASP.NET content.

My approach has always been: hit up the docs for just-in-time knowledge, or quick learning, and Learn modules for in-depth and hands-on content. Is that Microsoft’s approach?

The team offers content for folks at every level in their journey. If you’re a seasoned ASP.NET Core developer, the conceptual content and API reference content are great resources. If you’re new to .NET, the Learn .NET page is a better starting point. If you want to learn more about using ASP.NET Core with Azure, Microsoft Learn is a good place to start. In many Learn modules, an Azure subscription isn’t required to test drive Azure services with .NET. You’re provided an Azure Cloud Shell instance and a set of instructions that can often be completed in under an hour. Gamification is an important aspect of Learn. As you complete modules, you earn achievement badges. Virtual trophies are awarded for the completion of learning paths.

Over at Build, a few folks discussed Learn TV. What is it and what can we expect from it?

Learn TV is a Microsoft-owned video content platform that offers 24 hours of programming, 7 days a week. If you tuned in to the recent .NET Conf: Focus on Microservices event, you may have noticed that it streamed live on Learn TV. Additionally, Learn TV streams live shows from the Developer Relations organization, such as The .NET Docs Show and 92 & Pike. It’s still early days for Learn TV, hence the “preview” label on its landing page. Golnaz Alibeigi is working with a small team of folks to evolve it into a world-class product. I’ll defer to Golnaz for specific plans.

I know you work on a lot of Learn modules. What are you working on now?

Nish Anil, Cam Soper, and myself are authoring a .NET microservices learning path in Microsoft Learn. The learning path is based on the eBook .NET Microservices: Architecture for Containerized .NET apps and is expected to consist of seven modules. At the time of writing, the following modules have been published:

Next up is a microservices module focused on DevOps with GitHub Actions. The learning path project board is available for anyone to see.

What is your one piece of programming advice?

My advice is to ask probing questions and challenge the status quo. Regardless of your organizational rank, ask why your team operates the way they do. Ask why certain technology choices were made. The answers might surprise you. Equipped with those answers, you can begin to understand existing processes and formulate improvements. If you have an idea, don’t be afraid to speak up. Everyone brings a unique perspective to the table.

You can follow Scott Addie on Twitter.

]]>
<![CDATA[ C# 9: Records Revisited ]]> https://www.daveabrock.com/2020/08/14/records-spec/ 608c3e3df4327a003ba2fe4e Thu, 13 Aug 2020 19:00:00 -0500 We had a lot of fun playing with the new C# 9 features over the last few months. As with learning anything brand new, we had to decipher the info the best we could: we went off the announcement in May during Build, meeting minutes and GitHub issues from the team, and good old trial and error.

With .NET 5 fast approaching this fall, things have taken shape and Microsoft has released the set of C# 9 specification proposals. These proposals offer clarity on the new features, help to reinforce things I’ve heard or seen, and also allowed me to learn a thing or two.

I wanted to quickly share with you some new things I came across while scanning the records proposal.

What are records?

Short-ish answer: a record is a construct that allows you to encapsulate property state by providing immutable behavior without the need for extensive boilerplate (like for readonly structs). Record types are reference types, like class declarations, but allow you to implement value-like behaviors.

Long answer: check out last month’s post for a deep dive on the topic.
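
If you just want a taste before the deep dive, here’s a minimal sketch:

using System;

var person = new Person("Dave", "Brock");

// Non-destructive mutation: copy the record, changing only LastName.
var clone = person with { LastName = "Abrock" };

// Records get value-based equality for free.
Console.WriteLine(person == clone);                       // False
Console.WriteLine(person == new Person("Dave", "Brock")); // True

// A positional record: the compiler synthesizes the constructor,
// init-only properties, equality members, and ToString.
public record Person(string FirstName, string LastName);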

Random facts that I assumed but am glad are in writing

  • Record parameters can’t use ref, out, or this, but can use in and params
  • Records can inherit from other records, but cannot inherit from classes (unless it’s object) and classes can’t inherit from records

It looks like, as long as the record derives from object, records do include a synthesized method that allows you to inspect all the members for that record:

bool PrintMembers(System.StringBuilder builder);

From the spec, they note that it’ll iterate through a record’s public field and property members and take the format of this.firstName = "Dave", this.lastName = "Brock" and so on. You can declare this method explicitly.

I think using PrintMembers will definitely come in handy when you are working with complex records/inheritance and don’t want to override ToString. Quick and easy—I like it.

(It looks like the synthesized ToString method that ships with records invokes the PrintMembers method.)
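
If you do declare it yourself, here’s a minimal sketch (the format string is my own, not the spec’s):

using System.Text;

public record Person(string FirstName, string LastName)
{
    // Take over what the synthesized ToString emits between the braces.
    // Returning true signals that this method wrote members.
    protected virtual bool PrintMembers(StringBuilder builder)
    {
        builder.Append($"name = {FirstName} {LastName}");
        return true;
    }
}

// new Person("Dave", "Brock").ToString() => "Person { name = Dave Brock }"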

Wrap up

This was a quick post to show a few things I came across while reading the specification. Take a look yourself to check out details on how it works.

]]>
<![CDATA[ The .NET Stacks #11: Newtonsoft, async void, more with Jeremy Likness, more! ]]> https://www.daveabrock.com/2020/08/08/dotnet-stacks-11/ 608c3e3df4327a003ba2fe4d Fri, 07 Aug 2020 19:00:00 -0500 This week, we:

  • Talk a little about Newtonsoft.Json
  • Discuss why async void is bad
  • Give you a quick reminder to vote for the .NET Foundation
  • Continue our conversation with Jeremy Likness, Microsoft’s Sr. PM of .NET Data
  • Take a trip around the community

Newtonsoft.Json is still #1

If you scan NuGet statistics from time to time (what, this isn’t how you spend your Sunday evenings?), you’ll see Newtonsoft.Json, the popular JSON serializer/deserializer, still takes the #1 spot. It’s been downloaded about 52.2 million times in the last six weeks. That’s almost five times more than Serilog, the #2 finisher.

While Microsoft scrapped this library in favor of System.Text.Json with .NET Core 3.0, this can’t be too surprising—after all, I’m about 90% sure Web Forms and jQuery will outlive me.

In this week’s Visual Studio Magazine this week, they also take a look. Why the new-ish library? Last year, Immo Landwerth gave three reasons:

  • Better performance for higher throughput in .NET Core
  • Remove the library dependency and potential versioning headaches
  • Provide an ASP.NET Core integration package

If I want to switch over, I’m really only concerned about the first bullet. The last two bullets appear to be Microsoft-speak for wanting complete ownership and control. If you’re like me, when I heard this I was thinking of something like: “OK, great, better performance, so I’ll switch over if that’s something that’s important to me.”

It’s a little more complicated than that. If you look at the migration guide, there’s a table of differences that illustrates how it isn’t a simple package download and that in many ways, developers will cling to Newtonsoft’s features even when there’s a newer and more performant option.

This isn’t to say Newtonsoft.Json is slow - it’s not, it’s very fast! According to Spencer Schneidenbach’s post on the topic, the .NET team could not refactor it without making breaking changes. I’d expect Newtonsoft to continue to be the choice for existing applications, unless extremely high performance is a concern to you.

Why async void is bad

C# is full of “just because you can do it, doesn’t mean you should” pitfalls. Across Twitter these days, a lot of folks in the C# community are talking about why you shouldn’t use async void methods.

This isn’t an original, new, or groundbreaking idea: Stephen Cleary asked you to not do this in 2013 and Phil Haack talked about these methods being a “scourge upon your code” in 2014.

So why even have this in the language, then? These methods exist to make async event handlers possible. Event handlers typically return void, so async methods must also return void if you want an async event handler. If you aren’t using it for this exact purpose, here’s how you will feel the pain:

  • When you use async void, thrown exceptions will crash the process. Any uncaught exceptions kill your application and you won’t have a decent call stack to work with, unless you do some silly workarounds
  • There’s no good way to notify the calling code that you’ve completed, since you aren’t returning a Task<T>
  • As a result, they are a nightmare to test

In short, don’t do this and do async Task<T> instead. Thank you for coming to my TED talk.

Last chance to vote for the .NET Foundation board

The voting for the .NET Foundation Board of Directors ends at 11:59pm on August 3 (PST, in the US). If you’re a member, you should have received an email. If you aren’t a member, it isn’t too late! (The donation level is suggested but should not be a barrier to joining.)

A lot of people (including myself) have been asking the Foundation for increased communication and transparency. They’re listening: this week, they released their first public budget and also wrote a “State of the Union” post.

More with Jeremy Likness, Sr. PM of .NET Data at Microsoft

Last week, we got to know Jeremy. This week, we’ll get into his work and discuss Entity Framework and .NET 5.


To be honest, when I think of your role as the “Sr. PM of .NET Data,” I mostly think of “Sr. PM of Entity Framework.” Does anything else fall under that umbrella?

Entity Framework is certainly a large part of what I do, but the role is really about all of the ways that .NET developers interact with data. In addition to EF Core, I also deal with big data and manage .NET for Spark. I don’t just limit my scope to products I’m directly responsible for, but also focus on things like OData, gRPC and GraphQL for connecting to data over APIs.

I’m interested in improving the experience for the products we own at Microsoft as well as supporting community projects like Hot Chocolate, GraphQL .NET, and Dapper—just to name a few. I work closely with the team responsible for the SQL client. I work with Azure SQL and Cosmos DB.

I’m focused on an initiative to reorganize our documentation to have a landing page for .NET Data so that developers can find what they need with just a few clicks. It will focus on topics ranging from storage and NoSQL to relational databases, cache, APIs, and more.  I encourage anyone reading this to participate and share feedback using the GitHub issue. Our goal is to provide the best possible experience for .NET developers working with data, so you can imagine there is a lot of surface area to consider.

I know that when EF Core first rolled out, many features from EF 6 weren’t ported over (I’m thinking of ‘Always On’ encryption, for example). Coming into .NET 5, can we say that with progress over the last few years, EF Core can handle virtually any production need?

Specific to your example, I’m fairly certain that “Always On” encryption was an issue with lack of support in SqlClient and had nothing to do with Entity Framework. More generally, “any production need” covers a lot of ground.

Both EF 6 and EF Core are very mature and widely used and customers are employing both versions in production. For the common case of a data access layer for a web app, I believe that EF Core 5.0 will be “feature complete” for the majority of scenarios developers are looking to address. We are building support and documentation for end-to-end scenarios across all supported platforms, from Xamarin and Blazor to WinForms, WPF, and of course Azure.

Looking at the cloud, we are growing the set of features available for our Cosmos DB provider. The SQL Provider works with Azure SQL with nothing more than a connection string change. We are investigating how features like encryption and sharding work with EF Core to provide guidance where needed and consider features if gaps exist.

One caveat is that EF Core is not a micro-ORM and there is not a goal to support that use case. We don’t consider it the tool to solve all data access needs.

In fact, some customers use a lightweight solution like Dapper for queries and use EF Core for the create/update/delete side of things. This allows them to take advantage of the features that make EF Core shine, like advanced field-mapping with shadow properties, concurrency resolution, and the built-in change trackers and proxies, while having a go-to for lightweight data transfer objects and complex queries using the micro-ORM.

EF has definitely had its fair share of customers demanding to know when features would be developed. This is just me speculating here, but previously I think EF wasn’t as community-focused as it could have been, and the community didn’t have a lot of insights about the team and how decisions were made.

We’re all seeing some nice community involvement with roadmaps, documentation, and community standups (in which someone commented about the tremendous productivity for a team so small!). Can you talk about how you are engaging with the community today, and the different ways we can learn, stay up to speed, offer feedback, and understand what’s coming?

I can’t speak to the full history of the EF team, but my impression is they have been very community-focused for quite some time. They’ve been posting weekly updates for several years now and you can see open discussions of issues dating back longer than that. One thing that resonated with me when I joined the team is their focus and desire to be completely open. Some members came over to Microsoft after being contributors, and most of the engineering team have worked on the same product for years—some over a decade!

We do everything in the open. Everything is tracked via our issues and/or discussions. We prioritize features based on community feedback, including upvotes and comments. We use issues and discussions for design decisions. The roadmap is publicly available and updated whenever priorities shift.

There are a few ways to keep track of what’s new and to interface directly with the team. First, most of the team is active on Twitter. You can use the alias to find profiles and ask questions directly. Second, we are active on GitHub. Perhaps the most useful link is the issue that is pinned to the issues list and updated weekly with progress. For major releases, keep an eye on the .NET blog: it’s not just a list of what’s new, but also an opportunity to recognize our many community contributors. You might be amazed to see how many people are involved in code and documentation updates for EF Core 5.0!

The other way to keep in touch, and one I am very excited about, is our EF Core Community Standup. We host these every other Wednesday at 10am PT (17:00 UTC). During the standup, we highlight updates, community blog posts, projects, and contributions. It is a live stream and we encourage live Q&A so we can interact directly and answer the community questions that come up. We also take suggestions for shows and bring on guests when we can. You can find the standup schedules and watch previous recorded standups at live.dot.net.

Speaking of, while I can easily see what you’ve been delivering, what are some of the best upcoming feature improvements for EF Core you’re most proud of?

Many of the features are ready for preview. I’m excited that the team has tackled some of the most requested items for this release.

The many-to-many implementation that makes it possible to define relationships in a fluid way—for example, teams and individuals might have relationships, but down the road you want to add metadata about the relationship, so you go from Teams having a Person, and Person belongs to Teams, to a TeamsPerson domain object—will be fully baked into EF Core 5.0.

Split includes provide flexibility over the way you handle sub-collections (like a subquery vs. separate SQL call to fetch the collection). There has been significant effort put into improving the experience for managing schemas via migrations, from additional command line options and support to enhanced documentation. A huge feature is table-per-type mapping, and another is filtered includes that provide far more control over the data sets you return.

Honestly I’m excited about everything that’s coming out. I’ve personally been working on end-to-end scenarios and will be very happy to see clear guidance for using EF Core in Xamarin, Blazor, WinForms, WPF, UWP, and other platforms in addition to the documentation we already have.

What have you discovered about things like EF Core that you weren’t aware of, before you joined the team?

I was surprised just how popular and useful the Cosmos DB provider is. I was skeptical because the SDK is great, so why put EF Core in front of it? After joining the team and talking to customers I realized there is a huge ecosystem that looks at EF Core as a common interface to data. They want to use it in mobile (Xamarin), and in serverless Azure Functions, and to talk to NoSQL databases in addition to relational databases.

It was really eye-opening to discover just how much development time is saved when we’re able to support these scenarios and provide that common strategy for data access. Just something as simple as managing multiple types in the same collection with automatic discriminators makes a huge difference.

Some scenarios like Xamarin have been very customer-driven. Adoption in Blazor is huge, too, so we’re focused on providing appropriate guidance for all platforms and closing gaps that might exist.

I also am really impressed with the community collaborations the team has built. I sat on a call with a third-party database team that is building a .NET LINQ provider for their own SDK (not part of EF Core) and the EF Core team was happy to provide guidance, share gotchas and lessons learned and offer support from our experience building EF Core.

The team has great relationships across the board—we have conversations with the Dapper team, for example, and on the mobile side are happy to suggest SQLite-Net when it makes sense as a native ORM solution for mobile. It’s really a team focused on solving problems in the best way possible, rather than trying to make their own product the solution for everything.

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ The .NET Stacks #10: .NET 5 taking shape, how approachable is .NET, a talk with Jeremy Likness, more! ]]> https://www.daveabrock.com/2020/08/01/dotnet-stacks-10/ 608c3e3df4327a003ba2fe4c Fri, 31 Jul 2020 19:00:00 -0500 We had a tremendously busy week in .NET and it shows in this week’s edition.

This week, we:

  • Discuss the latest in .NET 5
  • Think about what it’s like to be a beginner in the .NET ecosystem
  • Kick off an interview with Jeremy Likness, Microsoft’s Sr. PM of .NET Data
  • Take a trip around the community

Also, say 👋🏻 to my parents! They just subscribed and this is the only line they’ll understand. 🤓

.NET 5 is close, so close

This week, we hit the preview 7 release for .NET 5 (see the notes for .NET 5 Preview 7, EF Core 5 Preview 7, and ASP.NET Core updates in the new preview, as well as Stephen Toub’s post on .NET 5 performance improvements). The next release for .NET 5 will be Preview 8, and then there will be two RCs (each with “go live” licenses). .NET 5 is slated for official release in early November.

What’s new in these previews? A few items of note:

  • Blazor WASM apps now target .NET 5
  • Blazor improvements in debugging, accessibility, and performance
  • In EF, a DbContextFactory, ability to clear the DbContext state, and transaction savepoints

It looks like single-file publishing, the ability for .NET Core apps to be published and distributed as a single executable file, is coming in the next release (Preview 8).

How would you help a beginner get started on .NET?

Dustin Gorski published a great post last week called .NET for Beginners. It certainly isn’t short, but I would recommend giving it a read when you get a few minutes. He discusses why many feel that .NET isn’t very approachable for newcomers. It opened my eyes: I was aware of most of his criticisms, but I’ve been dealing with them for so long that I haven’t thought about how difficult it can be for newcomers coming into .NET for the first time.

From my perspective, what I agreed most with was:

  • How you have to know a lot to get started
  • How feature bloat really prevents beginners from knowing the best way to do something, when there are so many ways to do it (with varying performance impacts)
  • How the constant change and re-architecting of the platform makes it almost impossible to keep up

Think about it: if someone came to you and asked how to get started in .NET using a sample application, what would you say?

Would you start by explaining .NET Core and how it’s different than .NET Framework? And mention .NET Standard as a bridge between them? Or get them excited about .NET 5? Would you also bring up C# and F# and VB and the differences between them? What about Xamarin and Mono? What about a simple web site? Should they get started with Blazor? Traditional MVC? Razor Pages? One of my biggest takeaways from the piece: you have to know a lot of trivia to start developing in .NET.

Feature bloat is real, especially when it comes to C#. Take, for example, the promise of immutability in C# 9 with records (which I’ve written about). Records are similar to structs but layer immutability features on top. The exact reasons aren’t important here (records are meant for immutability and for preventing boilerplate code), but they surface a valid complaint: why are we introducing a new construct into the language when we could have improved on what already exists? If someone wanted to use immutability in .NET, what would you say now? Use records in C# 9? Use readonly structs? What about a tuple class? Or use F#?
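
To make the “too many ways” point concrete, here’s a quick sketch of just two of those options side by side (the type names are mine, purely for illustration):

// A positional record (C# 9): value equality, with expressions, and
// init-only properties are all generated for you.
public record PersonRecord(string FirstName, string LastName);

// A readonly struct: also immutable after construction, but a value type
// with different copying and equality semantics.
public readonly struct PersonStruct
{
    public PersonStruct(string firstName, string lastName) =>
        (FirstName, LastName) = (firstName, lastName);

    public string FirstName { get; }
    public string LastName { get; }
}

Both are immutable once constructed, and a newcomer has no obvious way to know which one to reach for.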

This is not to say Microsoft isn’t aware of all these challenges and isn’t trying to improve. They definitely are! You can head over to try.dot.net to run C# in the browser (with an in-browser tutorial). You can run code in the docs and in Microsoft Learn modules. C# 9 top-level statements take away that pesky Main method in console apps (yet another shameless plug).

Coming from the slow-moving days of the bloated System.Web namespace, I never thought I would be hearing (and agreeing with) gripes about .NET moving too fast. This is a testament to the hard work by the folks at Microsoft. Taking hints from .NET 5, I hope .NET allows developers to focus on what’s important, not try to be everything to everyone, and avoid feature bloat in C#. I think a continued focus on approachability positively impacts all .NET developers.

Dev Discussions: Jeremy Likness, Sr. PM of Data at Microsoft

If you’ve worked on Azure or .NET for awhile—like Azure Functions and Entity Framework—you’re probably familiar with Jeremy Likness, the senior PM for .NET data at Microsoft. He’s spoken at several conferences (remember those?), writes about various topics at his site, is a familiar face on various .NET-related videos, and, as of late, can be seen in the Entity Framework community standups (and more!).

I caught up with Jeremy to talk about his path to software development and all that’s going on with Entity Framework and .NET 5. After you get through this week’s interview, I think we can all agree his path to Microsoft is both inspiring and absolutely crazy.

Jeremy was very generous with his time! Because there’s so much to cover, I’ve split this into two different parts. This week, we get to know Jeremy. Next week, we’ll get into his work and discuss Entity Framework and .NET 5.

Jeremy Likness

As a developer at heart, what kind of projects have you been tinkering with?

My first consulting projects … involved XAML via WPF then Silverlight. I was a strong advocate of Silverlight because I had built some very complex web apps using JavaScript and managing them across browsers was a nightmare. … Silverlight was an implicit promise to write C# code that could run anywhere, and for many reasons probably more political than technical, the promise was not fulfilled.

I believe it has been realized with .NET Core and more specifically, Blazor. I resisted Blazor when it came out due to the existence of mature web frameworks available like Angular, React, Vue.js and Svelte, but colleagues convinced me to give it a spin and I was blown away by two things: first, how productive I could be and produce so much in a short period of time. Second, how many existing packages work with it and run inside the browser “as is.”

I’ve been building a lot of Blazor apps to explore different protocols, ways of interacting with data, application of the MVVM pattern and more. I’m working to publish some reference applications that show how to use Blazor with Entity Framework Core and have published seven articles.

I am also diving into expressions. I published a blog post about parsing JSON and turning it into an expression tree to evaluate. I’ve also written about the inverse: parsing an IQueryable LINQ expression to pull out the various pieces. Imagine you have a queryable source that you send to a component that then applies ordering, filtering, and sorting. How do you inspect the result to verify how the query was manipulated?

What is your one piece of programming advice?

After decades of building software my biggest piece of advice is that your first step shouldn’t be to find the library or framework or tool, but to solve the problem.

Too often people add frameworks or tools or patterns because they’re recommended, rather than actually determining if they add value. Many times the overhead outweighs the benefit. I’m often told, “That solution isn’t right because you don’t have a business logic layer.” My response is, “So?” It’s not that I don’t see value in that layer for certain implementations, but that it’s not always necessary.

Have you worked on that project that forced an architecture so that one change involves updating five projects and the majority of them are just default “pass through” implementations? I am a fan of solving for the solution, and only then if you find some code is repeated, refactor. Don’t over-engineer or complicate. One of my favorite starting points for a solution is to consider, “What is the ideal way I’d like to code for this?”

This is just a small portion of my interview with Jeremy. There is so much more at my site, including Jeremy’s unusual path and a crazy story about his interview!

🌎 Last week in the .NET world

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Jeremy Likness (2 of 2) ]]> https://www.daveabrock.com/2020/08/01/dev-discussions-jeremy-likness-2/ 608c3e3df4327a003ba2fe4b Fri, 31 Jul 2020 19:00:00 -0500 This is the full interview from my discussion with Jeremy Likness in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away!

If you’ve worked on Azure or .NET for awhile—like Azure Functions and Entity Framework—you’re probably familiar with Jeremy Likness, the senior PM for .NET data at Microsoft. He’s spoken at several conferences (remember those?), writes about various topics at his site, is a familiar face on various .NET-related videos, and, as of late, can be seen in the Entity Framework community standups (and more!).

I caught up with Jeremy to talk about his path to software development and all that’s going on with Entity Framework and .NET 5.

Last week, we got to know Jeremy. Now, we’ll get into his work and discuss Entity Framework and .NET 5.

We go through a lot of resources in this post. I’ve included them all in a list at the bottom of this article.

Jeremy Likness

To be honest, when I think of your role as the “Sr. PM of .NET Data,” I mostly think of “Sr. PM of Entity Framework.” Does anything else fall under that umbrella?

Entity Framework is certainly a large part of what I do, but the role is really about all of the ways that .NET developers interact with data. In addition to EF Core, I also deal with big data and manage .NET for Spark. I don’t just limit my scope to products I’m directly responsible for, but also focus on things like OData, gRPC and GraphQL for connecting to data over APIs.

I’m interested in improving the experience for the products we own at Microsoft as well as supporting community projects like Hot Chocolate, GraphQL .NET, and Dapper—just to name a few. I work closely with the team responsible for the SQL client. I work with Azure SQL and Cosmos DB.

I’m focused on an initiative to reorganize our documentation to have a landing page for .NET Data so that developers can find what they need with just a few clicks. It will focus on topics ranging from storage and NoSQL to relational databases, cache, APIs, and more.  I encourage anyone reading this to participate and share feedback using the GitHub issue. Our goal is to provide the best possible experience for .NET developers working with data, so you can imagine there is a lot of surface area to consider.

That’s actually something that applies to the EF Core team as well. They are branded based on that product, but are really passionate about all things data-related. The team “owns” System.Data.

I know that when EF Core first rolled out, many features from EF 6 weren’t ported over (I’m thinking of ‘Always On’ encryption, for example). Coming into .NET 5, can we say that with progress over the last few years, EF Core can handle virtually any production need?

Specific to your example, I’m fairly certain that “Always On” encryption was an issue with lack of support in SqlClient and had nothing to do with Entity Framework. More generally, “any production need” covers a lot of ground.

Both EF 6 and EF Core are very mature and widely used and customers are employing both versions in production. For the common case of a data access layer for a web app, I believe that EF Core 5.0 will be “feature complete” for the majority of scenarios developers are looking to address. We are building support and documentation for end-to-end scenarios across all supported platforms, from Xamarin and Blazor to WinForms, WPF, and of course Azure.

Looking at the cloud, we are growing the set of features available for our Cosmos DB provider. The SQL Provider works with Azure SQL with nothing more than a connection string change. We are investigating how features like encryption and sharding work with EF Core to provide guidance where needed and consider features if gaps exist.

One caveat is that EF Core is not a micro-ORM and there is not a goal to support that use case. We don’t consider it the tool to solve all data access needs.

In fact, some customers use a lightweight solution like Dapper for queries and use EF Core for the create/update/delete side of things. This allows them to take advantage of the features that make EF Core shine, like advanced field-mapping with shadow properties, concurrency resolution, and the built-in change trackers and proxies, while having a go-to for lightweight data transfer objects and complex queries using the micro-ORM.
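
As a rough sketch of that hybrid approach (all type names here are invented for illustration, and it assumes the Dapper and Microsoft.Data.SqlClient packages alongside EF Core), reads go through the micro-ORM while writes go through EF Core:

using System.Collections.Generic;
using Dapper;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

public class Order { public int Id { get; set; } public decimal Total { get; set; } }
public class OrderSummary { public int Id { get; set; } public decimal Total { get; set; } }

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderService
{
    private readonly StoreContext _context;
    private readonly string _connectionString;

    public OrderService(StoreContext context, string connectionString) =>
        (_context, _connectionString) = (context, connectionString);

    // Query side: Dapper maps a lightweight DTO straight from SQL.
    public IEnumerable<OrderSummary> GetLargeOrders()
    {
        using var connection = new SqlConnection(_connectionString);
        return connection.Query<OrderSummary>(
            "SELECT Id, Total FROM Orders WHERE Total > @min", new { min = 100 });
    }

    // Create/update/delete side: EF Core handles change tracking and concurrency.
    public void AddOrder(Order order)
    {
        _context.Orders.Add(order);
        _context.SaveChanges();
    }
}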

EF has definitely had its fair share of customers demanding to know when features would be developed. This is just me speculating here, but previously I think EF wasn’t as community-focused as it could have been, and the community didn’t have a lot of insight into the team and how decisions were made. We’re all seeing some nice community involvement with roadmaps, documentation, and community standups (in which someone commented about the tremendous productivity for a team so small!). Can you talk about how you are engaging with the community today, and the different ways we can learn, stay up to speed, offer feedback, and understand what’s coming?

I can’t speak to the full history of the EF team, but my impression is they have been very community-focused for quite some time. They’ve been posting weekly updates for several years now and you can see open discussions of issues dating back longer than that. One thing that resonated with me when I joined the team is their focus and desire to be completely open. Some members came over to Microsoft after being contributors, and most of the engineering team have worked on the same product for years—some over a decade!

We do everything in the open. Everything is tracked via our issues and/or discussions. We prioritize features based on community feedback, including upvotes and comments. We use issues and discussions for design decisions. The roadmap is publicly available and updated whenever priorities shift.

There are a few ways to keep track of what’s new and to interface directly with the team. First, most of the team is active on Twitter. You can use the alias to find profiles and ask questions directly. Second, we are active on GitHub. Perhaps the most useful link is the issue that is pinned to the issues list and updated weekly with progress. For major releases, keep an eye on the .NET blog: it’s not just a list of what’s new, but also an opportunity to recognize our many community contributors. You might be amazed to see how many people are involved in code and documentation updates for EF Core 5.0!

The other way to keep in touch, and one I am very excited about, is our EF Core Community Standup. We host these every other Wednesday at 10am PT (17:00 UTC). During the standup, we highlight updates, community blog posts, projects, and contributions. It is a live stream and we encourage live Q&A so we can interact directly and answer the community questions that come up. We also take suggestions for shows and bring on guests when we can. You can find the standup schedules and watch previous recorded standups at live.dot.net.

I’ve been noticing how, in preparation for .NET 5, the EF Core preview releases seem to be in lock-step with ASP.NET Core. Is this intentional? Are you working on a more predictable release cycle and coordination with other teams?

Yes. We have intentionally aligned EF Core releases with .NET Core releases. ASP.NET Core follows the same cadence. Due to the way all three interoperate, it just makes sense to align them together.

That way we can build a release that potentially takes advantage of new runtime, framework, and SDK features released at the same time, and likewise ASP.NET Core, for example, can pull in our releases for things like templates that provide identity management relying on EF Core. Finally, it provides a consistent cadence for messaging. It makes it very clear when customers can acquire and test preview releases and when to expect the final release.

Speaking of, while I can easily see what you’ve been delivering, what are some of the best upcoming feature improvements for EF Core you’re most proud of?

Many of the features are ready for preview. I’m excited that the team has tackled some of the most requested items for this release.

The many-to-many implementation that makes it possible to define relationships in a fluid way—for example, teams and individuals might have relationships, but down the road you want to add metadata about the relationship, so you go from Teams having a Person, and Person belongs to Teams, to a TeamsPerson domain object—will be fully baked into EF Core 5.0.

Split includes provide flexibility over the way you handle sub-collections (like a subquery vs. separate SQL call to fetch the collection). There has been significant effort put into improving the experience for managing schemas via migrations, from additional command line options and support to enhanced documentation. A huge feature is table-per-type mapping, and another is filtered includes that provide far more control over the data sets you return.
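
For a feel of those two features, here is a minimal sketch (the entity and context names are mine, not from the interview):

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Team
{
    public int Id { get; set; }
    public List<Member> Members { get; set; }
}

public class Member
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

public class LeagueContext : DbContext
{
    public DbSet<Team> Teams { get; set; }

    public List<Team> GetActiveRosters() =>
        Teams
            // Filtered include (new in EF Core 5): only load active members.
            .Include(t => t.Members.Where(m => m.IsActive))
            // Split query: one SQL statement per collection instead of one big join.
            .AsSplitQuery()
            .ToList();
}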

Honestly I’m excited about everything that’s coming out. I’ve personally been working on end-to-end scenarios and will be very happy to see clear guidance for using EF Core in Xamarin, Blazor, WinForms, WPF, UWP, and other platforms in addition to the documentation we already have.

What have you discovered about things like EF Core that you weren’t aware of, before you joined the team?

I was surprised just how popular and useful the Cosmos DB provider is. I was skeptical because the SDK is great, so why put EF Core in front of it? After joining the team and talking to customers I realized there is a huge ecosystem that looks at EF Core as a common interface to data. They want to use it in mobile (Xamarin), and in serverless Azure Functions, and to talk to NoSQL databases in addition to relational databases.

It was really eye-opening to discover just how much development time is saved when we’re able to support these scenarios and provide that common strategy for data access. Just something as simple as managing multiple types in the same collection with automatic discriminators makes a huge difference.
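
Here’s a hedged sketch of what that scenario can look like with the Cosmos DB provider (the entity and container names are invented, and it assumes the Microsoft.EntityFrameworkCore.Cosmos package): two types share a container, and EF Core stamps a discriminator on each document so it can materialize the right CLR type on the way back out.

using Microsoft.EntityFrameworkCore;

public class Order { public int Id { get; set; } }
public class Invoice { public int Id { get; set; } }

public class DocumentsContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseCosmos("<account endpoint>", "<account key>", databaseName: "AppDb");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Both entity types live in the same container; EF Core adds a
        // discriminator property automatically to tell them apart.
        modelBuilder.Entity<Order>().ToContainer("Documents");
        modelBuilder.Entity<Invoice>().ToContainer("Documents");
    }
}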

Some scenarios like Xamarin have been very customer-driven. Adoption in Blazor is huge, too, so we’re focused on providing appropriate guidance for all platforms and closing gaps that might exist.

I also am really impressed with the community collaborations the team has built. I sat on a call with a third-party database team that is building a .NET LINQ provider for their own SDK (not part of EF Core) and the EF Core team was happy to provide guidance, share gotchas and lessons learned and offer support from our experience building EF Core.

The team has great relationships across the board—we have conversations with the Dapper team, for example, and on the mobile side are happy to suggest SQLite-Net when it makes sense as a native ORM solution for mobile. It’s really a team focused on solving problems in the best way possible, rather than trying to make their own product the solution for everything.

Resources

You can follow Jeremy Likness at his site and on Twitter.

]]>
<![CDATA[ Talk with Groot using the Microsoft Bot Framework and Azure sentiment analysis ]]> https://www.daveabrock.com/2020/07/28/azure-bot-service-cognitive-services/ 608c3e3df4327a003ba2fe4a Mon, 27 Jul 2020 19:00:00 -0500 Like any good adventure, this one started on Twitter.

My friend (and friend of the newsletter) Shahed Chowdhuri took his lunch break to create a simple chat bot using the Microsoft Bot Framework. A user enters something, and it responds with I am Groot.

If you aren’t familiar with Groot, he is a tree-like being from the Marvel universe who only responds with I am Groot - leaving much up to interpretation. Here’s an idea: how about we change how Groot responds based on the sentiment of what a user types? For example, if a user types something happy, or sad, can we respond in kind? Luckily, we can have some fun with Azure’s Text Analytics Service to analyze sentiments.

From here on out, however, we will address him as Gruut*. Why?

*please don’t sue us

In this post, we’ll do the following:

  • Create a chatbot using the Microsoft Bot Framework
  • Integrate the Azure Text Analytics service to detect a user’s sentiment
  • Learn how to debug bots
  • Safely store our credentials to access our Azure services

Many thanks to Shahed for the inspiration for this post.

Before you get started

While there are multiple ways to develop with the Bot Framework, we’ll use Visual Studio tooling. So before you get started with this tutorial, you’ll need to install Bot Framework templates and the Bot Framework Emulator.

Install Bot Framework SDK templates

To make project generation easier, you’ll need to install the Bot Framework v4 SDK Templates for Visual Studio. In your Visual Studio, go to Extensions > Manage Extensions, and search for bot. It should be the first result.

bot framework extension

Alternatively, you can find the direct download at this location.

Install Bot Framework Emulator

To test and debug your bots (either locally or remotely), you’ll need to install the Bot Framework Emulator. Head on over to the GitHub releases to install the emulator for your specific environment.

Create a Text Analytics Service

Now, we can head out to Azure to create a Text Analytics instance. Assuming you have an Azure account (if not, you can go to the Azure site to sign up and get free credits), head on over to the Azure Portal at portal.azure.com.

In the search box, enter Text Analytics, and select Text Analytics from the Marketplace section.

bot framework extension

Enter a name, select your subscription, a location, pricing tier (the Free tier should be fine for this tutorial), and a resource group, then click Create.

bot framework extension

After the deployment completes, click Go to resource. Then, click Keys and Endpoint. You’ll need to grab a key (Key 1 or Key 2 is fine) and the endpoint for your application. Copy these values somewhere, like a text file.

echo bot template

Excellent! Let’s move on to Visual Studio to create our chatbot.

Create your bot project

After you install the Bot Framework v4 Templates for Visual Studio, create a new project as you typically would.

From the Project Types drop-down, select AI Bots and select the Echo Bot template. This will give us the basic functionality we need (since Gruut does a lot of echoing).

echo bot template

Introducing your EchoBot

When your project loads, navigate over to EchoBot.cs (in your Bots folder). You’ll see the EchoBot inherits from the Microsoft.Bot.Builder.ActivityHandler class, and has two methods implemented for you.

  • OnMessageActivityAsync - this is where you’ll include code specific to your conversational logic. We’ll override this and do our work here.
  • OnMembersAddedAsync - this is where you’ll provide logic when new members join the conversations (like welcome logic). You can remove this method if you wish, since we aren’t using it.

We’ll be working with the Azure.AI.TextAnalytics package. You can install it now from the NuGet Package Manager or get Visual Studio assistance to install when you get errors. Your choice. 😎
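
(If you prefer the command line, this should do it from your project directory:)

dotnet add package Azure.AI.TextAnalytics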

First pass: getting it working

Now, still in EchoBot.cs, find the Text Analytics key and endpoint you copied when you created your instance in Azure. At the beginning of your class, declare static class variables for your credentials and endpoint:

private static readonly AzureKeyCredential credentials = new AzureKeyCredential("<your key>");
private static readonly Uri endpoint = new Uri("<your endpoint>");

Hard-coding your credentials is reckless. Don’t do this for any apps that have actual users. We will fix this soon—we want to get this working first.

Moving on to our OnMessageActivityAsync method, we’ll first get the text the user entered. We get this from the turnContext that is passed in, which is an ITurnContext<IMessageActivity>. The Activity.Text property gets the contents of the sent message as a string.

string userInput = turnContext.Activity.Text;

Now, we’re ready to work with our Text Analytics service. We will first get an instance of the Text Analytics client, passing in our credentials:

var client = new TextAnalyticsClient(endpoint, credentials);

Then, we want to call the client’s AnalyzeSentiment method. When we call this, we get a DocumentSentiment back, with tons of data to work with.

We can work with confidence scores, a sentiment for each sentence, or any warnings that occur. For our purposes, we want to work with the Sentiment property. This property is a TextSentiment, an enum with the values Mixed, Negative, Neutral, and Positive. This should be fine for our purposes. Then, we can make a decision on what to send back to the user.

var sentiment = client.AnalyzeSentiment(userInput).Value.Sentiment;

Now, we can write a local function in C# (thanks for the inspiration, David Pine) that leverages switch expressions. We send back specific responses for Positive, Negative, and Neutral results. If we get anything else back, we have a fallback.

static string GetReplyText(TextSentiment sentiment) => sentiment switch
{
    TextSentiment.Positive => "I am Gruut.",
    TextSentiment.Negative => "I AM GRUUUUUTTT!!",
    TextSentiment.Neutral => "I am Gruut?",
    _ => "I. AM. GRUUUUUT"
};

Now, all that’s left is to invoke the GetReplyText function and call SendActivityAsync to send the message back to the user.

var replyText = GetReplyText(sentiment);
await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);

That should do it! For your reference, here’s the entire class.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Azure;
using Azure.AI.TextAnalytics;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

namespace GruutChatbot.Bots
{
    public class EchoBot : ActivityHandler
    {
        private static readonly AzureKeyCredential credentials = new AzureKeyCredential("<my key>");
        private static readonly Uri endpoint = new Uri("<my endpoint>");

        protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
        {
            var client = new TextAnalyticsClient(endpoint, credentials);
            string userInput = turnContext.Activity.Text;
            var sentiment = client.AnalyzeSentiment(userInput).Value.Sentiment;

            static string GetReplyText(TextSentiment sentiment) => sentiment switch
            {
                TextSentiment.Positive => "I am Gruut.",
                TextSentiment.Negative => "I AM GRUUUUUTTT!!",
                TextSentiment.Neutral => "I am Gruut?",
                _ => "I. AM. GRUUUUUT"
            };

            var replyText = GetReplyText(sentiment);
            await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
        }
    }
}

Let’s try it out

Let’s see how this works. Before we debug this, set a breakpoint at the declaration of GetReplyText. Now, start debugging your app in Visual Studio. A browser will launch—take note of the port number after localhost (https://localhost:xxxx).

Finally, we can now work with the Bot Framework Emulator. At the initial screen, click Open Bot. For your bot URL, enter http://localhost:xxxx/api/messages and replace xxxx with your port number you are using. Then, click Connect.

echo bot template

You are connected! Enter some text to try it out and hit your breakpoint. I’ll type I’m really mad.

With my breakpoint hit, hover over sentiment and you’ll see it comes back as Negative.

echo bot template

Great! Hit Continue in Visual Studio, go back to the Bot Framework Emulator, and you’ll see that Gruut matches your anger.

echo bot template

Excellent! This is great—however, before we ship an app with hard-coded credentials in the source code, we should probably clean that up. Let’s do that now.

Second pass: working with credentials safely

Clearly, we do not want to hard-code our credentials. Instead, here’s what we’ll do:

  • Store our key and endpoint in the Azure Key Vault
  • In Azure Active Directory, create a new app registration
  • Give this new registration rights to access the Key Vault
  • In our app, use the configuration provider to access key vault using our provided client ID and secret

Of course, there are even more, enterprise-y ways to do it, like managed identities—like everything else, it’s a balance of security and overkill for what we’re doing.

Before we proceed, you’ll need to create a Key Vault or use an existing one. For details on how to create a Key Vault, check out the docs and come back when you’re done. I’ll wait.

Add secrets to Key Vault

From your Key Vault, click Secrets and then Generate/Import to create two secrets: AzureKeyCredential and CognitiveServicesEndpoint. I won’t be showing you my setup for security reasons, but it’s pretty straightforward.
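
If you’d rather script it than click through the portal, the Azure CLI equivalent should look something like this (the vault name is a placeholder):

az keyvault secret set --vault-name <your-vault-name> --name AzureKeyCredential --value "<your key>"
az keyvault secret set --vault-name <your-vault-name> --name CognitiveServicesEndpoint --value "<your endpoint>"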

Set up app registration in Azure Active Directory

We’re now ready to set up our app registration. Head on over to Azure Active Directory (I just use the search box), and click App Registrations, then New registration. Give your registration a name and accept the default supported account type, then click Register.

echo bot template

Copy the Application (client) ID, as we will need it later.

Create client secret

Of course, this registration does nothing on its own, so we’ll need to create a client secret so our app can know about it. While still in the App Registrations section, click Certificates & secrets, then click New client secret.

Enter a name, leave the default expiration policy, and click Add.

echo bot template

You should see your new client secret. Copy this value somewhere—we will also need it soon.

echo bot template

Give your app rights to the Key Vault

Now, we’re ready to head over to the Key Vault to give our chatbot access to use it via the app registration. Once you get to the Key Vault, click Access policies, then + Add Access Policy.

Accept the defaults other than the following, then click Add:

  • Secret permissions - Get, List
  • Select principal - Select the app registration you created earlier (in my case, gruut-bot)
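
Again, if you prefer scripting, granting the policy from the Azure CLI should look roughly like this (using the client ID you copied earlier):

az keyvault set-policy --name <your-vault-name> --spn <your-client-id> --secret-permissions get list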

We’re done with Azure. Now, all we need to do is update our app.

Update your app

Update your appsettings.json file to include the name of your key vault (whatever comes before .vault.azure.net) and the client ID and secret you just copied.

{
  "KeyVault": {
    "Name": "my-key-vault-name",
    "ClientId": "my-client-id",
    "ClientSecret": "my-client-secret"
  }
}

In your Program.cs file, update CreateHostBuilder to grab the values in question when the app starts.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            var builtConfig = config.Build();
            config.AddAzureKeyVault(
                $"https://{builtConfig["KeyVault:Name"]}.vault.azure.net",
                builtConfig["KeyVault:ClientId"],
                builtConfig["KeyVault:ClientSecret"]);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

Now, if you go back to EchoBot.cs, remove the hard-coded strings and replace them with a constructor that takes the IConfiguration interface using some dependency injection. This allows us to access our configuration properties.

private readonly IConfiguration _configuration;

public EchoBot(IConfiguration configuration)
{
    _configuration = configuration;
}

Then, all you need to do is change the TextAnalyticsClient parameters to reference your configuration.

var client = new TextAnalyticsClient(
             new Uri(_configuration["CognitiveServicesEndpoint"]),
             new AzureKeyCredential(_configuration["AzureKeyCredential"]));

Run your app again and you’ll see it should be working the same as before—except this time, our Text Analytics credentials are stored away in our Key Vault.

Wrapping up

In this post, we got our feet wet with the Microsoft Bot Framework. We incorporated Azure Cognitive Services to detect the sentiment of a user, and also worked on safeguarding our credentials using the Azure Key Vault and Azure Active Directory.

You can access the source code at Shahed’s repo, and PRs/GIFs (or both) are always welcome.

]]>
<![CDATA[ The .NET Stacks #9: Project Coyote, new Razor editor, and more! ]]> https://www.daveabrock.com/2020/07/25/dotnet-stacks-9/ 608c3e3df4327a003ba2fe49 Fri, 24 Jul 2020 19:00:00 -0500 So, what do you think? I changed newsletter providers and got a new look (thanks to @comfycoder for the logo help). I hope you like it. It works on my machine, and I hope it works on yours too. 😎

Any suggestions? Hit me up on Twitter or just reply to this e-mail. I’m all ears on what content you’d like—I want to make this the best part of your Monday!

This week we’re talking about debugging async with Coyote, an exciting new Razor editor in Visual Studio, a fun look at unit testing, and more!

Debug async issues faster with Coyote

Isn’t async programming fun? It’s so much fun, you can probably relate to this. (If you’ve spent months on a single concurrency issue, it might also make you cry.)

All jokes aside, it’s a concurrent world and we just live in it. Our customers demand it. In this microservice-friendly, cloud-first world, we want things fast and efficient at the lowest cost possible. In the cloud, minimizing cost requires high throughput, which requires high concurrency. If you haven’t pulled your hair out from a concurrency/threading issue, to that I say: welcome to our field. I hope you had a nice graduation party.

We know this is hard, and that’s why we rely on the .NET framework to handle most of the magic for us. With the Task Parallel Library (TPL) and async/await, we don’t have to worry about thread scheduling, state management, or other hard low-level details.

However, we also get flexibility with APIs like Task.Run that give us the power to queue work—if we don’t know exactly what we’re doing, we can get in loads of trouble. Even with high code coverage, powerful integration testing, and stress testing, we still know we’re not too far from a concurrency issue that will cost our customers (and our employers) tons of time and money.
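
As a contrived illustration of how little it takes to get in trouble (this is a classic lost-update race using only the standard TPL, not Coyote-specific code):

using System;
using System.Threading.Tasks;

class Program
{
    static int _counter;

    static async Task Main()
    {
        var tasks = new Task[10];
        for (var i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                // _counter++ is a read-modify-write: two tasks can read the
                // same value and one increment silently disappears.
                for (var j = 0; j < 1_000; j++)
                {
                    _counter++;
                }
            });
        }

        await Task.WhenAll(tasks);

        // Expected 10000; without a lock or Interlocked.Increment, the
        // printed value is usually less, and it varies run to run.
        Console.WriteLine(_counter);
    }
}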

Meet Coyote, a set of tools that help you find these seemingly impossible-to-reproduce bugs quicker. (It is an evolution of the P# project, and is moving out of research mode.)

How does this work? After you write code that uses the Coyote programming model—the familiar task-based model, currently in preview—you can run coyote test, where the magic begins. When you run this, Coyote manages task scheduling and explores all the complexities of your code. Coyote runs tests repeatedly with “different scheduling choices” each time. If a bug is found, Coyote reports a reproducible bug trace, which can be replayed until you find the bug.

Coyote has been put through its paces for various Azure services and promises integration with unit testing frameworks and minimal overhead. Is it worth it for internal CRUD apps? Maybe not. But if you are operating at tremendous scale where concurrency issues keep you up at night, it might be a game-changer.

There’s so much to take in! Check out the resources for more details on Project Coyote.

New Razor editor for Visual Studio in preview

It seems like we’re constantly getting news of a new Visual Studio preview feature, and this week is no exception. If you download Visual Studio 2019 16.7 Preview 4 (got all that?) you can use a brand new Razor editor to assist your local development with Blazor, MVC, and Razor Pages.

Most of us know Razor but if you don’t: it’s a templating language (using HTML and, most frequently, C#) you use to define how you dynamically render content in your ASP.NET app. With Blazor, you can reuse UI components using .razor files. Razor in Visual Studio comes with IntelliSense, completions, and the like. Why a new editor, then?

Today, Visual Studio does a lot of tricks behind the scenes with projection buffers for the different languages it supports. For example, if you have C# and HTML in a Razor file, Visual Studio reads the Razor file from disk, sends the code to a Razor buffer, a C# buffer, and an HTML buffer. Finally, each buffer uses its own independent language service for each language you use. Oof.

Why should you care? If you’ve signed a lifetime contract to use Visual Studio and you’re a team of one, fine—but if you want extensibility with Codespaces or Live Share in VS, you’re in trouble. All these dependencies require a lot of work to coordinate and can hinder the speed of future improvements.

To address this, Microsoft has been working on a Razor Language Server that implements the features you know and love through a common protocol. Wherever the extension exists (VS, VS Code, wherever), it coordinates with the Razor Language Server.

This solution is currently used for Razor support in VS Code, and will be used for Codespaces and Live Share for Visual Studio. And now, you can try the VS implementation today (in preview). It has some rough edges, but I look forward to seeing how it performs. I do know Razor mostly works well but can exhibit some inconsistent performance. Will this help?

🌎 Last week in the .NET world

I’m trying a new format this week so you can sift through the links easier.

🔥 The Top 3

📢 Announcements

📅 Community and events

😎 Blazor

🚀 .NET Core

⛅ The cloud

📔 Languages

🔧 Tools

📱 Xamarin

🎤 Podcasts

🎥 Videos

]]>
<![CDATA[ Dev Discussions - Jeremy Likness (1 of 2) ]]> https://www.daveabrock.com/2020/07/25/dev-discussions-jeremy-likness-1/ 608c3e3df4327a003ba2fe48 Fri, 24 Jul 2020 19:00:00 -0500 This is the full interview from my discussion with Jeremy Likness in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away!

If you’ve worked on Azure or .NET for awhile—like Azure Functions and Entity Framework—you’re probably familiar with Jeremy Likness, the senior PM for .NET data at Microsoft. He’s spoken at several conferences (remember those?), writes about various topics at his site, is a familiar face on various .NET-related videos, and, as of late, can be seen in the Entity Framework community standups (and more!).

I caught up with Jeremy to talk about his path to software development and all that’s going on with Entity Framework and .NET 5. After you get through this week’s interview, I think we can all agree his path to Microsoft is both inspiring and absolutely crazy.

Jeremy was very generous with his time! Because there’s so much to cover, I’ve split this into two different parts. This week, we get to know Jeremy. Next week, we’ll get into his work and discuss Entity Framework and .NET 5.

Jeremy Likness

Can you walk me through your path to Microsoft?

My desire to be a programmer started when I was 7 years old back in 1981. At home and bored, I found a manual for our TI-99/4A computer that ran at a blazing 3 MHz (nope, not GHz) with 16 kilobytes (nope, not a typo) of memory. I typed in a BASIC program, ran it, and was blown away. It felt like magic and I realized that this was something I could master.

I pursued a Computer Science degree in college but ended up dropping out for mental health reasons. I believed what everyone told me: that without a degree I might as well give up on a career, so I focused on fast food, retail, hospitality, and even worked in a pool hall for a year.

My personal break came when I interviewed for a job at an insurance company and mentioned that I speak Spanish. They hired me as a Spanish-speaking claims representative. I was always competitive so I tried to close the most claims in the department, but the “green screen” software running on an AS/400 (now called iSeries) would often crash.

We’d call tech support, and they’d come over and basically break into a command line and restart the app. I decided to close the loop and fix it myself, and when IT figured out what I was doing, they called me up. I expected to get disciplined but instead was offered a job in IT. It was a night shift position mostly focused on reloading printer cartridges in refrigerator-sized printers, but that gave me time to study the language they used, RPG, and eventually earn a spot on the programming team.

I worked at various positions and had job titles ranging from Technical Team Lead to Director of IT. I even owned my own company for a few years. It was an online fitness business that I scaled up doing online coaching, delivering seminars and selling CDs and books.

I sold the company when I had the opportunity to join a startup focused on custom hotspot portals back in 2006. My first day there, the CEO took me to Ikea, bought a desk then dropped me off at a loft in downtown Atlanta. “Find a spot you like, assemble the desk and get to work.” The company later pivoted to mobile device management.

After five years of startup mode working 12-hour days and most weekends, I had to make a change. I left the company with nothing vested and they were acquired a few years later for $1.5 billion. I was very proud of the infrastructure and product I helped build to position them for the acquisition, but also extremely happy to have my health, time, and sanity back.

I went into consulting at a company named Wintellect that was entirely remote. I went from not being home for weeks at a time to being home all the time, and working over 80 hours a week to 40-hour weeks with bonuses if I had to work more. This strengthened my relationship with my wife and daughter and taught me the importance of balance. I was invited by another local Atlanta company to build their application consulting practice and joined as their director of application development. Over three years, I built up a multi-million dollar practice and had the opportunity to hire and mentor some amazing developers.

I was very happy in that role and regularly ignored recruiting emails, but one came in from Microsoft and I couldn’t ignore it. I watched the positive cultural transformation under Satya Nadella over social media and just a week earlier had told my wife, “Some great things are happening at Microsoft.” The timing was perfect, so I thought, “It can’t hurt to try.”

I replied and started a series of interviews. I waited a few weeks to receive the feedback that I interviewed well, but they were looking for someone with a lot more open source contributions. My work at the time was largely proprietary enterprise line of business apps. I decided it wasn’t meant to be and moved on… only to receive a call a week later about another role. I interviewed for that role and felt good about it, but didn’t hear back for several weeks. I was finally told that my interview went great but due to budget adjustments, the position was no longer open.

Clearly I was not meant to work at Microsoft, right? Then I received a third call about a new developer advocate role. The first call was all about why I thought I would be a good fit and why I was willing to leave a role as Director of Application Development to become an individual contributor. I explained it was always a dream of mine to work at Microsoft and that the role felt custom fit for me – after all, I was already advocating to raise awareness about our app dev services at my current company. “Great, we’ll schedule an interview.”

At the time I planned a trip to Italy with my wife and some close friends to celebrate our 20-year anniversary. Microsoft wanted two days to interview me. I reported directly to the CEO and knew it would be strange to request two extra days off at the last minute. I had one extra day after the trip to adjust to the time zone change, so I asked the recruiter if they could squeeze the interviews into a single day and make it happen on that specific day. To my surprise, they said, “Yes.”

I flew back to Atlanta from Italy, helped my wife get to her shuttle to head home, then went back into the airport to fly to Seattle. I landed at midnight, was in my hotel by 1 am and back up at 5 am to start a day of interviews. It was the best decision I ever made because I landed the role! I was renewed for my 8th year as an MVP just 8 days before my start date on July 10th, 2017.

I wrote a fun post spanning my career and also wrote a post about my recent transition to the PM role.

As a developer at heart (your blog is called “Developer for Life”, after all) what kind of projects have you been tinkering with?

My first consulting projects at Wintellect involved XAML via WPF then Silverlight. I was a strong advocate of Silverlight because I had built some very complex web apps using JavaScript and managing them across browsers was a nightmare. It is a common misconception that JavaScript was “different”, when in reality it was the implementation of the HTML interface or DOM that caused us grief in the earlier days.

Silverlight raised productivity orders of magnitude and XAML made it possible to work in parallel with designers. It was a game-changer and I dedicated time to learn it inside and out. Silverlight was an implicit promise to write C# code that could run anywhere, and for many reasons probably more political than technical, the promise was not fulfilled.

I believe it has been realized with .NET Core and more specifically, Blazor.

I resisted Blazor when it came out due to the existence of mature web frameworks available like Angular, React, Vue.js and Svelte, but colleagues convinced me to give it a spin and I was blown away by two things: first, how productive I could be and produce so much in a short period of time. Second, how many existing packages work with it and run inside the browser “as is.”

Want to build a Markdown engine? No worries, take your pick of libraries to pull in. For personal learning (and as a result, education for others) I’ve been building a lot of Blazor apps to explore different protocols, ways of interacting with data, application of the MVVM pattern and more. I’m working to publish some reference applications that show how to use Blazor with Entity Framework Core and have published seven articles.

I am also diving into expressions. I published a blog post about parsing JSON and turning it into an expression tree to evaluate. I’ve also written about the inverse: parsing an IQueryable LINQ expression to pull out the various pieces. Imagine you have a queryable source that you send to a component that then applies ordering, filtering, and sorting. How do you inspect the result to verify how the query was manipulated?
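
One hedged sketch of how you might answer that question (illustrative code, not Jeremy’s): walk the queryable’s expression tree with an ExpressionVisitor and record each LINQ operator you find.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class MethodCallCollector : ExpressionVisitor
{
    public List<string> Calls { get; } = new List<string>();

    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        // Record the operator (Where, OrderBy, Take, ...) and keep walking.
        Calls.Add(node.Method.Name);
        return base.VisitMethodCall(node);
    }
}

class Program
{
    static void Main()
    {
        // Pretend a component manipulated this queryable for us.
        var query = new[] { 3, 1, 2 }.AsQueryable()
            .Where(i => i > 1)
            .OrderBy(i => i)
            .Take(5);

        var collector = new MethodCallCollector();
        collector.Visit(query.Expression);

        // Outermost call first: Take, OrderBy, Where
        Console.WriteLine(string.Join(", ", collector.Calls));
    }
}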

My other projects are related to products I work with. I’m working on reference samples for using EF Core with Blazor, WPF, and WinForms. I’m also starting to explore big data and will be looking at projects that use .NET for Spark.

What is your one piece of programming advice?

After decades of building software my biggest piece of advice is that your first step shouldn’t be to find the library or framework or tool, but to solve the problem.

Too often people add frameworks or tools or patterns because they’re recommended, rather than actually determining if they add value. Many times the overhead outweighs the benefit. I’m often told, “That solution isn’t right because you don’t have a business logic layer.” My response is, “So?” It’s not that I don’t see value in that layer for certain implementations, but that it’s not always necessary.

Have you worked on that project that forced an architecture so that one change involves updating five projects and the majority of them are just default “pass through” implementations? I am a fan of solving for the solution, and only then if you find some code is repeated, refactor. Don’t over-engineer or complicate. One of my favorite starting points for a solution is to consider, “What is the ideal way I’d like to code for this?”

For example, I am building a Blazor MVVM implementation. For testing the filtering and sorting that the view model applies, I would love to pass an IQueryable<Entity> then do something like this…

Assert.That(query.CalledMethod(nameof(Enumerable.Take)).WithValue(5))

…to verify that the view model applied the right extensions for paging. I start with what is easy to read and understand, then work backwards to provide an API that satisfies it.

As another example, when you raise property change notifications, you often have dependencies. If I have Quantity and CostPerItem, then TotalCost changes whenever Quantity or CostPerItem does.

I’d love to say …

TotalCost.DependsOn(CostPerItem).AndAlso(Quantity)

… so I start with that, then build the appropriate interfaces to make it work. That in my opinion leads to readable, maintainable code.
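
Purely as an illustration of “start with the API, then work backwards” (the DependencyMap type below is an invention for this article, not Jeremy’s code), one possible shape is a small builder that records which properties should re-raise change notifications:

using System.Collections.Generic;

public class DependencyMap
{
    private readonly Dictionary<string, List<string>> _map = new Dictionary<string, List<string>>();

    public Builder Property(string dependent) => new Builder(this, dependent);

    // The properties to re-raise when `source` changes.
    public IReadOnlyList<string> DependentsOf(string source) =>
        _map.TryGetValue(source, out var list) ? list : new List<string>();

    private void Add(string source, string dependent)
    {
        if (!_map.TryGetValue(source, out var list))
        {
            _map[source] = list = new List<string>();
        }
        list.Add(dependent);
    }

    public class Builder
    {
        private readonly DependencyMap _owner;
        private readonly string _dependent;

        internal Builder(DependencyMap owner, string dependent) =>
            (_owner, _dependent) = (owner, dependent);

        public Builder DependsOn(string source)
        {
            _owner.Add(source, _dependent);
            return this;
        }

        public Builder AndAlso(string source) => DependsOn(source);
    }
}

A view model would register map.Property(nameof(TotalCost)).DependsOn(nameof(CostPerItem)).AndAlso(nameof(Quantity)) once, then raise PropertyChanged for each entry in DependentsOf whenever a source property changes.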

You can follow Jeremy Likness at his site and on Twitter.

]]>
<![CDATA[ C# 9: Putting it all together with a scavenger hunt ]]> https://www.daveabrock.com/2020/07/21/c-sharp-9-scavenger-hunt/ 608c3e3df4327a003ba2fe47 Mon, 20 Jul 2020 19:00:00 -0500 Here we are, friends: our last post in our C# 9 deep dive series. We’ve discussed a lot of topics, so I thought it’d be fun to bundle all we’ve learned into a single post. (You can think of this as a CliffsNotes of the 10,000 or so words I’ve written about C# 9 so far.)

So, let’s go on a scavenger hunt! (I know I have readers from different countries and customs—where I’m from, it’s a fun game where you have to run around outside and find a number of miscellaneous things from a list). So this is how us nerds do it: inside, with air conditioning, geeking out about features that aren’t fully released yet.

I’ll be focusing on the how, not the why—you can see all my other posts for the full details on the concepts.

This is the sixth—and last!—post on my C# 9 deep dive series.

Our scavenger hunt: the list

Here’s our list for our scavenger hunt. We’ll check it off as we go!

(Target typing ?? and ?: and data member simplification have been removed from the list, as they do not compile in today’s previews.)

  • Init-only properties
  • Init accessors and readonly fields
  • Records
  • with expressions
  • Value-based equality
  • Positional records
  • with expressions and inheritance
  • Value-based equality and inheritance
  • Top-level programs
  • Improved pattern matching
  • Simple type patterns
  • Relational patterns
  • Logical patterns
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

The low-hanging fruit: top-level programs

For a quick win, let’s make a top-level program. If you remember, this allows us to run a program without that pesky Main method.

using System;

Console.WriteLine("I am Groot.");

Well, that was easy! Knock one off the list.

  • Init-only properties
  • Init accessors and readonly fields
  • Records
  • with expressions
  • Value-based equality
  • Positional records
  • with expressions and inheritance
  • Value-based equality and inheritance
  • Top-level programs
  • Improved pattern matching
  • Simple type patterns
  • Relational patterns
  • Logical patterns
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

Init-only properties

With init-only properties, we can use object initializers with immutable fields.

Let’s create an Avenger class:

public class Avenger
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

Now, we can use them with object initializers in C# 9.

using System;

var ironMan = new Avenger { FirstName = "Tony", LastName = "Stark" };
var hulk = new Avenger { FirstName = "Bruce", LastName = "Banner" };
var blackWidow = new Avenger { FirstName = "Natasha", LastName = "Romanova"};

Console.WriteLine($"Iron Man is {ironMan.FirstName} {ironMan.LastName}.");
Console.WriteLine($"The Hulk is {hulk.FirstName} {hulk.LastName}.");
Console.WriteLine($"The Black Widow is {blackWidow.FirstName} {blackWidow.LastName}.");

Now we have two off the list, hooray!

  • Init-only properties ✔
  • Init accessors and readonly fields
  • Records
  • with expressions
  • Value-based equality
  • Positional records
  • with expressions and inheritance
  • Value-based equality and inheritance
  • Top-level programs ✔
  • Improved pattern matching
  • Simple type patterns
  • Relational patterns
  • Logical patterns
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

Init accessors and readonly fields

As we’ve learned, init accessors can only be called when you initialize. If you try it after, you’ll be greeted with this:

FirstName cannot be assigned to -- it is read-only

With these init accessors, you can also set read-only fields—something you could previously only do through constructors. Changing only the Avenger class, do it this way:

class Avenger
{
    private readonly string firstName;
    private readonly string lastName;

    public string FirstName
    {
        get => firstName;
        init => firstName = (value ?? throw new ArgumentNullException(nameof(FirstName)));
    }
    public string LastName
    {
        get => lastName;
        init => lastName = (value ?? throw new ArgumentNullException(nameof(LastName)));
    }
}

OK, one more gone. On to records!

  • Init-only properties ✔
  • Init accessors and readonly fields ✔
  • Records
  • with expressions
  • Value-based equality
  • Positional records
  • with expressions and inheritance
  • Value-based equality and inheritance
  • Top-level programs ✔
  • Improved pattern matching
  • Simple type patterns
  • Relational patterns
  • Logical patterns
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

Records

Let’s convert our object to a record. This allows an entire object-like construct to be immutable and act like a value.

Change the Avenger class to a record, and our program should still run as expected:

record Avenger
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

Use with expressions

With records, we can leverage non-destructive mutation: a way to create new values from existing ones to represent new state. We can make a copy of a record, and then change what we need. What if the Hulk had a little boy, who joins him and also turns green when he’s angry?

Change the main part of the program to this:

using System;

var hulk = new Avenger { FirstName = "Bruce", LastName = "Banner"};
var babyHulk = hulk with { FirstName = "Baby" };

Console.WriteLine($"The Hulk is {hulk.FirstName} {hulk.LastName}.");
Console.WriteLine($"The Baby Hulk is {babyHulk.FirstName} {babyHulk.LastName}.");

Use with expressions with inheritance

Records support inheritance; structs do not. We can even combine inheritance with the with expression.

Change your program to this, and it works through the power of a hidden virtual method, which I showed off when discussing records in-depth.

using System;

var hulk = new Hulk { FirstName = "Bruce", LastName = "Banner", AngerLevel = 90};
var babyHulk = hulk with { FirstName = "Baby" };

Console.WriteLine($"The Hulk is {hulk.FirstName} {hulk.LastName}, " +
                    $"and anger level is {hulk.AngerLevel}.");
Console.WriteLine($"The Baby Hulk is {babyHulk.FirstName} {babyHulk.LastName}, " +
                    $"and anger level is {babyHulk.AngerLevel}.");

record Hulk : Avenger
{
    public int AngerLevel { get; init; }
}

record Avenger
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

Use value-based equality and inheritance

When it comes to records, C# 9 takes care of value-based equality for you through an EqualityContract property. Every derived record overrides it, and for two records to compare as equal, they must share the same EqualityContract.

Because records are immutable, we can create a new record, set the value back, and see that the comparison returns equal.

using System;

// init-only properties
var hulk = new Hulk { FirstName = "Bruce", LastName = "Banner", AngerLevel = 90};
var babyHulk = hulk with { FirstName = "Baby" };

Console.WriteLine(Object.ReferenceEquals(hulk, babyHulk)); // false
Console.WriteLine(Object.Equals(hulk, babyHulk)); // false

var backToRegularHulk = babyHulk with { FirstName = "Bruce" };

Console.WriteLine(Object.Equals(hulk, backToRegularHulk)); // true

record Hulk : Avenger
{
    public int AngerLevel { get; init; }
}

record Avenger
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}
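
One wrinkle worth a quick sketch of my own: because each record type reports its own EqualityContract, records of different types are never equal—even with identical data. Reusing the records above:

var plainAvenger = new Avenger { FirstName = "Bruce", LastName = "Banner" };
var derivedHulk = new Hulk { FirstName = "Bruce", LastName = "Banner" };

// Same data, but Avenger and Hulk have different EqualityContracts:
Console.WriteLine(Object.Equals(plainAvenger, derivedHulk)); // false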

Positional records

We can take a positional approach, where data is supplied through constructor arguments, and we can use deconstruction to extract the data.

using System;

var hulk = new Avenger("Bruce", "Banner");
var (first, last) = hulk;

Console.WriteLine(first);
Console.WriteLine(last);

public record Avenger
{
    string FirstName;
    string LastName;
    public Avenger(string firstName, string lastName) 
      => (FirstName, LastName) = (firstName, lastName);
    public void Deconstruct(out string firstName, out string lastName) 
      => (firstName, lastName) = (FirstName, LastName);
}

That should do it for records! How are we doing so far?

  • Init-only properties ✔
  • Init accessors and readonly fields ✔
  • Records ✔
  • with expressions ✔
  • Value-based equality ✔
  • Positional records ✔
  • with expressions and inheritance ✔
  • Value-based equality and inheritance ✔
  • Top-level programs ✔
  • Improved pattern matching
  • Simple type patterns
  • Relational patterns
  • Logical patterns
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

Improved pattern matching

C# 9 offers improved pattern matching for simple type patterns, relational patterns, and logical patterns.

In this example, we will calculate a monthly insurance cost depending on the Hulk’s AngerLevel. Our records look like this:

record Hulk : Avenger
{
    public int AngerLevel { get; init; }
}

record Avenger
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}

Simple type patterns

Using simple type patterns, we don’t need a discard (_) just to match on the type. Now, we can just say Hulk => 1000 in CalculateInsuranceCost:

static int CalculateInsuranceCost(object hulk) =>
    hulk switch
    {
        Hulk t when t.AngerLevel > 90 => 10000,
        Hulk t when t.AngerLevel < 20 => 100,
        Hulk => 1000,

        _ => throw new ArgumentException("Not a known hulk", nameof(hulk))
    };

Relational patterns

With relational patterns, we can use <, >, and so on with pattern matching. Take a look at this CalculateInsuranceCost method:

static int CalculateInsuranceCost(Hulk hulk) =>
  hulk.AngerLevel switch
  {
    > 90 => 10000,
    < 20 => 100,
    _ => 1000,
  };

Logical patterns

With logical patterns, you can use and, or, and not:

static int CalculateInsuranceCost(Hulk hulk) =>
  hulk.AngerLevel switch
  {
    > 90 => 10000,
    >= 20 and <= 90 => 1000,
    _ => 100,
  };

How’s our scavenger hunt going? Are we done yet?

  • Init-only properties ✔
  • Init accessors and readonly fields ✔
  • Records ✔
  • with expressions ✔
  • Value-based equality ✔
  • Positional records ✔
  • with expressions and inheritance ✔
  • Value-based equality and inheritance ✔
  • Top-level programs ✔
  • Improved pattern matching ✔
  • Simple type patterns ✔
  • Relational patterns ✔
  • Logical patterns ✔
  • Improved target typing
  • Target-typed new expressions
  • Covariant returns

Target typing with new expressions

Great! Let’s scratch another quick one off the list with some target typing.

Let’s have an Avenger class like so:

public class Avenger
{
    private string _firstName;
    private string _lastName;

    public Avenger(string firstName, string lastName)
    {
        _firstName = firstName;
        _lastName = lastName;
    }
}

Now we can use it with new expressions:

using System;

Avenger ironMan = new ("Tony", "Stark");
Avenger hulk = new ("Bruce", "Banner");
Avenger blackWidow = new ("Natasha", "Romanova");

We can also see its benefit when creating collections:

using System;
using System.Collections.Generic;

var avengerList = new List<Avenger>
{
  new ("Tony", "Stark"),
  new ("Bruce", "Banner"),
  new ("Natasha", "Romanova"),
};

Covariant returns

One item left: covariant returns. If you read the last post, this should be familiar to you. With return type covariance, you can override a base class method (that has a less-specific return type) with one that returns a more specific type. To return some new objects, you could try this:

public virtual Avenger GetHero()
{
    // this is the parent (or base) class
    return new Avenger();
}

public override Hulk GetHero()
{
    // better!
    return new Hulk();
}
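
To see it end to end, here’s a self-contained sketch—the factory types are my own framing, not from the earlier posts:

using System;

var factory = new HulkFactory();
Hulk hulk = factory.GetHero();          // no cast needed thanks to the covariant return
Console.WriteLine(hulk.GetType().Name); // Hulk

public class Avenger { }
public class Hulk : Avenger { }

public class AvengerFactory
{
    public virtual Avenger GetHero() => new Avenger();
}

public class HulkFactory : AvengerFactory
{
    public override Hulk GetHero() => new Hulk(); // C# 9: a more specific return type
}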

We did it!

  • Init-only properties ✔
  • Init accessors and readonly fields ✔
  • Records ✔
  • with expressions ✔
  • Value-based equality ✔
  • Positional records ✔
  • with expressions and inheritance ✔
  • Value-based equality and inheritance ✔
  • Top-level programs ✔
  • Improved pattern matching ✔
  • Simple type patterns ✔
  • Relational patterns ✔
  • Logical patterns ✔
  • Improved target typing ✔
  • Target-typed new expressions ✔
  • Covariant returns ✔

And that’s it, friends

Thank you so much for reading and providing feedback on all these articles. This won’t be the last I write about C#, obviously, but it does conclude my first pass on C# 9 features. As always, I’m excited to hear how things are going for you—don’t be shy!

]]>
<![CDATA[ The .NET Stacks #8: functional C# 9, .NET Foundation nominees, Azure community, more! ]]> https://www.daveabrock.com/2020/07/18/dotnet-stacks-8/ 608c3e3df4327a003ba2fe46 Fri, 17 Jul 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

On tap this week:

  • C# 9: a functionally better release
  • The .NET Foundation nominees are out!
  • Dev Discussions: Michael Crump
  • Community roundup

C# 9: a functionally better release

I’ve been writing a lot about C# 9 lately. No, seriously: a lot. This week I went a little nuts with three posts: I talked about records, pattern matching, and top-level programs. I’ve been learning a ton, which is always the main goal, but what’s really interesting is how C# is starting to blur the lines between object-oriented and functional programming. Throughout the years, we’ve seen some FP concepts visit C#, but I feel this release is really kicking it up a notch.

In the not-so-distant past, discussing FP and OO meant putting up with silly dogmatic arguments that they have to be mutually exclusive. It isn’t hard to understand why: traditional concepts of OO constructs are grouping data and behavior (state) in single mutable objects, and FP draws a hard line between data and behavior in the name of purity and minimizing side effects (immutability by default).

So, typically as a .NET developer, this left you with two choices: C#, .NET’s flagship language, or F#, a wonderful functional language that is concise (no curlies or semi-colons and great type inference), convenient (functions as first-class objects), and has default immutability.

However, this is no longer a binary choice. For example, let’s look at a blog post from a few years ago that maps C# concepts to F# concepts.

  • C#/OO has variables, F#/FP has immutable values. C# 9 init-only properties and records bring that ability to C#.
  • C# has statements, F# has expressions. C# 8 introduced switch expressions and enhanced pattern matching, and has more expressions littered throughout the language now.
  • C# has objects with methods, F# has types and functions. C# 9 records are also blurring the line in this regard—see the quick sketch below.
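
Here’s a tiny illustration of that blend—my own example, not from the post linked above:

var p1 = new Point(1, 2);
var p2 = p1 with { X = 5 };           // non-destructive mutation on an immutable value

var quadrant = (p2.X, p2.Y) switch    // an expression, not a statement
{
    (> 0, > 0) => "first",
    (< 0, > 0) => "second",
    _ => "elsewhere"
};

public record Point(int X, int Y);    // immutable data in one line, F#-style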

So here we are: just a few years after wondering if F# would ever take over C#, we see people wondering the exact opposite, as Isaac Abraham asks: will C# replace F#? (Spoiler alert: no.)

There is definitely pushback in the community from C# 8 purists, to which I say: why not both? You now have the freedom to “bring in” the value of functional programming, while doing it in a familiar language. You can bring in these features, along with C#’s compatibility. These changes will not break the language. And if they don’t appeal to you, you don’t have to use them. (Of course, mixing FP and OO in C# is not always graceful and is definitely worth mentioning.)

This isn’t a C# vs F# rant, but it comes down to this: is C# with functional bits “good enough” because of your team’s skillset, comfort level, and OO needs? Or do you need a clean break, and immutability by default? As for me, I enjoy seeing these features gradually introduced. For example, C# 9 records allow you to build immutable structures but the language isn’t imposing this on you for all your objects. You need to opt in.

A more nuanced question to ask is: will C#’s functional concepts ever overpower the language and tilt the scales in FP’s direction? Soon, I’ll be interviewing Phillip Carter (the PM for F# at Microsoft) and am curious to hear what he has to say about it. Any questions? Let me know soon and I’ll be sure to include them.

The .NET Foundation nominees are out

This week, the .NET Foundation announced the Board of Director nominees for the 2020 campaign. I am familiar with most of these folks (a few are subscribers, hi!)—it’s a very strong list and you probably can’t go wrong with anyone. I’d encourage you to look at the list and all their profiles to see who you’d like to vote for (if you are a member). If not, you can apply for membership. Or, if you’re just following the progress of the foundation, that’s great too.

I know I’ve talked a lot about the Foundation lately, but this is an important moment for the .NET Foundation. The luster has worn off and it’s time to address the big questions: what exactly is the Foundation responsible for? Where is the line between “independence” and Microsoft interests? When OSS projects collide with Microsoft interests, what is the process to work through it? And will the Foundation commit itself to open communication and greater transparency?

As for me, these are the big questions I hope the nominees are thinking about, among other things.

Dev Discussions: Michael Crump

If you’ve worked on Azure, you’ve likely come across Michael Crump’s work. He started Azure Tips and Tricks, a collection of tips, videos, and talks—if it’s Azure, it’s probably there. He also runs a popular Twitch stream where he talks about various topics.

I caught up with Michael to talk about how he got to working on Azure at Microsoft, his work for the developer community, and his programming advice.

Michael Crump

My crack team of researchers tell me that you were a former Microsoft Silverlight MVP. Ah, memories. Do you miss it?

Ah, yes. I was a Microsoft MVP for 4 years, I believe. I spent a lot of time working with Silverlight because, at that time, I was working in the medical field and a lot of our doctors used Macs. Since I was a C# WinForms/WPF developer, I jumped at the chance to start using those skillsets for code that would run on PCs and Macs.

Can you walk me through your path to Microsoft, and what you do at Microsoft now?

I started in Mac tech support because after graduating college, Mac tech support agents were getting paid more than PC agents (supply and demand, I guess!). Then, I was a full-time software developer for about 8 years. I worked in the medical field and created a calculator that determined what amount of vitamins our pre-mature babies should take.

Well, after a while, the stress got to me and I discovered my love for teaching and started a job at Telerik as a developer advocate. Then, the opportunity came at Microsoft for a role to educate and inspire application developers. So my role today consists of developer content in many forms, and helping to set our Tier 1 event strategy for app developers.

Tell us a little about Azure Tips and Tricks. What motivated you to get started, and how can people get involved?

Azure Tips and Tricks was created because I’d find a thing or two about Azure, and forget how to do it again. It was originally designed as something just for me, but many blog aggregators started picking up on the posts and we decided to go big with it—e-books, blog posts, videos, conference talks, and stickers.

The easiest way to contribute is by clicking on the Edit Page button at the bottom of each page. You can also go to http://source.azuredev.tips to learn more.

What made you get into Twitch? What goes on in your channel?

I loved the ability to actually code and have someone watch you and help you code. The interactivity aspect and seeing the same folks come back gets you hooked.

The stream is broken down into three streams a week:

  • Azure Tips and Tricks, every Wednesday at 1 PM PST (Pacific Standard Time, America)
  • Live Interviews with Developers, every Friday at 9 AM PST (Pacific Standard Time, America)
  • Live coding/Security Sunday streams, Sundays at 10:30 AM PST (Pacific Standard Time, America)

What is your one piece of programming advice?

I actually published a list of my top 12 things every developer should know.

My top one would probably be to learn a different programming language (other than your primary language). Simply put, it broadens your perspective and permits a deeper understanding of how a computer and programming languages work.

This is only an excerpt of my talk with Michael. Read the full interview over at my website.

Community roundup

An extremely busy week, full of great content!

Microsoft

Announcements

Videos

Blog posts

Community Blogs

ASP.NET Core

Blazor

Entity Framework

Languages

Azure

Xamarin

Tools

Projects

Community podcasts and videos

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ C# 9: Answering your questions ]]> https://www.daveabrock.com/2020/07/17/c-sharp-9-q-and-a/ 608c3e3df4327a003ba2fe45 Thu, 16 Jul 2020 19:00:00 -0500 Note: Originally published five months before the official release of C# 9, I’ve updated this post after the release to capture the latest updates.

In the last month or so, I’ve written almost 8,000 words about C# 9. That seems like a lot (and it is!) but there is so much to cover! I’ve talked about how it reduces mental energy, simplifies null validation, and took on a deep dive series featuring init-only features, records, pattern matching, top-level programs, and target typing and covariant returns.

After publishing all these posts, I received a lot of great questions in my Disqus comment section. Instead of burying the conversations there, I’d like to discuss these questions in case you missed them. I learned a lot from the questions, so thank you all!

Init-only features

From the post on init-only features, we had two questions:

Fernando Margueirat asks: What’s the difference between init and readonly?

The big difference is that with C# 9 init-only properties, you are allowed to use object initializers. With readonly fields, you cannot.

The Microsoft announcement says: “The one big limitation today is that the properties have to be mutable for object initializers to work … first call the object’s constructor and then assigning to the property setters.” Because readonly fields must already be assigned by the time the constructor finishes, you can’t set them with object initializers.
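
A quick sketch of the contrast (the type names here are mine, just for illustration):

var a = new WithInit { Name = "Tony" };        // fine: init works in object initializers
var b = new WithReadonly("Tony");              // fine: constructor assignment
// new WithReadonly("Tony") { Name = "Nat" };  // error: a readonly field can't be
                                               // assigned in an object initializer

public class WithInit
{
    public string Name { get; init; }
}

public class WithReadonly
{
    public readonly string Name;                     // assignable only at declaration...
    public WithReadonly(string name) => Name = name; // ...or in a constructor
}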

saint4eva asks: Can a get-only property provide the same level of immutability as an init-only property?

Similar to the last question: init-only properties allow assignment during object initialization, while get-only properties can only be assigned from the constructor (or at declaration).
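
Again, a small illustrative sketch (the Hero type is hypothetical):

var hero = new Hero("Tony") { Alias = "Iron Man" }; // Alias is settable here; Name is not

public class Hero
{
    public Hero(string name) => Name = name;

    public string Name { get; }          // get-only: constructor assignment only
    public string Alias { get; init; }   // init-only: constructor or object initializer
}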

Records

From the post on record types, we also had one and a half questions:

WOBFIE says: So we should use so called “records” just because… some monkey encoded “struct” as “class”?!

OK, this is less of a question and more of something that made me laugh (and why I say this is half of a question—as much as I love all my readers). But let’s read between the lines of what WOBFIE might be thinking: that records are just a hacked-together struct?

In the post itself, I explained the rationale for adding a new construct over building on top of struct.

  • An easy, simplified construct whose intent is to use as an immutable data structure with easy syntax, like with expressions to copy objects
  • Robust equality support with Equals(object), IEquatable<T>, and GetHashCode()
  • Constructor/deconstructor support with simplified positional records

The endgame is not to complicate workarounds—it is to provide a dedicated construct for immutability that doesn’t require a lot of wiring up on your end.

Daniel DF says: I would imagine that Equals performance decreases with the size of the record particularly when comparing two objects that are actually equal. Is that true?

That is a wonderful question. Since I was unsure, I reached out to the language team on their Gitter channel. I got an answer within minutes, so thanks to them!

Here is what Cyrus Najmabadi says:

Equals is spec’ed at the language to do pairwise equality of the record members.
they have value-semantics
in general, the basic approach of implementing this would likely mean you pay more CPU for equality checks. Though the language doesn’t concern itself with that. It would be an implementation detail of the compiler.
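
To picture that pairwise comparison, here’s roughly what the compiler synthesizes for a small record—a simplified, hand-written approximation, not the exact emitted code:

using System.Collections.Generic;

public record Person(string FirstName, string LastName);

// An approximation of the equality generated for Person (the real version also
// checks EqualityContract; the cost grows with the number of members compared):
public class PersonApproximation
{
    public string FirstName { get; init; }
    public string LastName { get; init; }

    public virtual bool Equals(PersonApproximation other) =>
        other is not null
        && EqualityComparer<string>.Default.Equals(FirstName, other.FirstName)
        && EqualityComparer<string>.Default.Equals(LastName, other.LastName);
}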

Target typing and covariant returns

From my post on target typing and covariant returns, we had one question from two different readers.

Pavel Voronin and saint4eva ask: Are covariant return types a runtime feature or is it just a language sugar?

This was another question I sent to the team. Short answer: covariant return types are a runtime feature.

Long answer: it could have been implemented as syntactic sugar only, using stubs—but the team was concerned about it leading to worse performance and increased code bloat when working with a lot of nested hierarchies. Therefore, they went with the runtime approach.

Also, while in the Gitter channel I learned that covariant returns are only supported for classes as of now. The team will look to address interfaces at a later time.
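
For context, here’s a sketch of the hand-written “stub” workaround the team wanted to avoid baking into the compiler (the class names are mine):

public class Avenger { }
public class Hulk : Avenger { }

public class AvengerFactory
{
    public virtual Avenger GetHero() => new Avenger();
}

public class HulkFactory : AvengerFactory
{
    // Keep the base signature for the override...
    public override Avenger GetHero() => new Hulk();

    // ...and expose a typed "stub" so callers don't have to cast.
    public Hulk GetHulk() => (Hulk)GetHero();
}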

]]>
<![CDATA[ C# 9 Deep Dive: Target Typing and Covariant Returns ]]> https://www.daveabrock.com/2020/07/14/c-sharp-9-target-typing-covariants/ 608c3e3df4327a003ba2fe44 Mon, 13 Jul 2020 19:00:00 -0500 We’ve been quite busy, my friends. In this C# 9 deep dive series, we’ve looked at init-only features, records, pattern matching, and then top-level programs. To complete this series (before showing off everything in a single app), we’ll discuss the last two items featured in the Build 2020 announcement: target typing and covariant returns. These are not related, but I’ve decided to bundle these in a single blog post.

This is the fifth post in a six-post series on C# 9 features in-depth.


Improved target typing

C# 9 includes improved support for target typing. What is target typing, you say? It’s what C# uses, normally in expressions, for getting a type from its context. A common example would be the use of the var keyword. The type can be inferred from its context, without you needing to explicitly declare it.

The improved target typing in C# 9 comes in two flavors: new expressions and target-typing ?? and ?:.

Target-typed new expressions

With target-typed new expressions, you can leave out the type you instantiate. At first glance, this appears to only work with direct instantiation and not coupled with var or constructs like ternary statements.

Let’s take a condensed Person class from previous posts:

public class Person
{
    private string _firstName;
    private string _lastName;

    public Person(string firstName, string lastName)
    {
        _firstName = firstName;
        _lastName = lastName;
    }
}

To instantiate a new Person, you can omit the type on the right-hand side of the assignment.

class Program
{
    static void Main(string[] args)
    {
        Person person = new ("Tony", "Stark");
    }
}

A big advantage of target-typed new expressions comes when you are initializing new collections. If I wanted to create a list of multiple Person objects, I wouldn’t need to repeat the type every time I create a new object.

With the same Person class in place, you can change the Main function to do this:

class Program
{
    static void Main(string[] args)
    {
        var personList = new List<Person>
        {
            new ("Tony", "Stark"),
            new ("Howard", "Stark"),
            new ("Clint", "Barton"),
            new ("Captain", "America")
            // ...
        };
    }
}

Target typing with conditional operators

Speaking of ternary statements, we can now infer types by using the conditional operators. This works well with ??, the null-coalescing operator. The ?? operator returns the value of what’s on the left if it is not null. Otherwise, the right-hand side is evaluated and returned.

So, imagine we have some objects that shared the same base class, like this:

public class Person
{
    private string _firstName;
    private string _lastName;

    public Person(string firstName, string lastName)
    {
        _firstName = firstName;
        _lastName = lastName;
    }
}

public class Student : Person
{
    private string _favoriteSubject;

    public Student(string firstName, string lastName, string favoriteSubject) : base(firstName, lastName)
    {
        _favoriteSubject = favoriteSubject;
    }
}

public class Superhero : Person
{
    private string _maxSpeed;

    public Superhero(string firstName, string lastName, string maxSpeed) : base(firstName, lastName)
    {
        _maxSpeed = maxSpeed;
    }
}

While the code below does not get past the compiler in C# 8, it will in C# 9 because there’s a target (base) type that both operands are convertible to:

static void Main(string[] args)
{
    Student student = new Student ("Dave", "Brock", "Programming");
    Superhero hero = new Superhero ("Tony", "Stark", "10000");

    Person anotherPerson = student ?? hero;
}

Covariant returns

It has been a long time coming—almost two decades of begging and pleading, actually. With C# 9, it looks like return type covariance is finally coming to the language. You can now say bye-bye to implementing some interface workarounds. OK, so just saying return type covariance makes me sound super smart, but what is it?

With return type covariance, you can override a base class method (that has a less-specific type) with one that returns a more specific type.

Before C# 9, you would have to return the same type in a situation like this:

public virtual Person GetPerson()
{
    // this is the parent (or base) class
    return new Person();
}

public override Person GetPerson()
{
    // you can return the child class, but still return a Person
    return new Student();
}

Now, you can return the more specific type in C# 9.

public virtual Person GetPerson()
{
    // this is the parent (or base) class
    return new Person();
}

public override Student GetPerson()
{
    // better!
    return new Student();
}

Wrapping up

In this post, we discussed how C# 9 makes improvements with target types and covariant returns. We discussed target-typing new expressions and their benefits (especially when initializing collections). We also discussed target typing with conditional operators. Finally, we discussed the long-awaited return type covariance feature in C# 9.

]]>
<![CDATA[ The .NET Stacks #7: Azure SDKs, testing, community roundup, more! ]]> https://www.daveabrock.com/2020/07/12/dotnet-stacks-7/ 608c3e3df4327a003ba2fe43 Sat, 11 Jul 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

On tap this week:

  • A look at the Azure SDKs
  • What’s your test automation maturity level?
  • The evolution (and optics) of how far .NET has come
  • Community roundup

A look at the Azure SDKs

If you work with the Azure SDKs, have you looked into the improvements from the last 6-9 months? (You can catch up with a quick video Microsoft pushed out last month.)

Here’s the gist: before the new SDKs were released, each Azure team created their own SDKs in their own repositories, with their own CI process, with varying levels of OS support, language support, and so on. The new SDKs promise new common guidelines, centralized systems and repositories, and consistent security and support. These SDKs deliver language support for .NET, Java, Python, and JavaScript/TypeScript, with a promise of more languages to come. Most importantly, packages have been released to all the common package managers, meeting developers where they are—instead of asking them to conform to the whims of each API.

Microsoft has a landing page of sorts where you can review releases, feature updates, and API information.

What’s your test automation maturity level?

From the Applitools blog, Angie Jones published a nice piece on how to measure your test automation maturity. While this is tech-agnostic, it’s worth a mention in this space as I found it valuable for me to think more critically about my own testing approach.

I found it interesting that, in her research, she found that 60% of the teams no longer have a distinction between development and QA engineers. Score one for end-to-end ownership and the DevOps movement.

As you read the article, you can give your team its own testing maturity score based on:

  • Your ability to automate unit tests, web tests, API tests, security tests, performance tests, and accessibility tests
  • If you test across browsers, devices, and screen sizes
  • If tests are executed in your CI/CD pipeline
  • If you use feature flags (shameless plug for my series on .NET feature toggles)

What was your score?

The evolution (and optics) of how far .NET has come

If you spend any time looking at #dotnet Twitter, this week was an interesting one. Billy Collins (@BlazorGuy) tweeted, “What’s up with non-.NET developers thinking C# is a Windows only, corporate bloatware language? It’s not 2005 anymore!” This caused another discussion spearheaded by ASP.NET Core architect David Fowler. Microsoft is definitely having to strike a fine balance of attracting modern development, yet supporting enterprise (legacy) customers.

The big challenge Microsoft still faces is convincing developers who haven’t worked on .NET in a while—and have perhaps ditched it for faster, cross-platform solutions—that .NET has had a cutting-edge platform again for the last 5 years. Their insistence on keeping the .NET/ASP.NET name on every iteration of the platform has, I believe, done Microsoft a disservice. Let’s say I ditched .NET in the old Framework days and only have a passing interest in .NET. If I hear things like .NET Core, .NET Standard, and .NET 5, do I believe things have changed without researching more?

As Fowler tweeted, “This is why timing and marketing is super important in product development. Not only is it hard to erase search history for something that’s been around for 20 years, but you have to deal with the impressions people has years ago about a “new product” with the same name. At the same time, you want to leverage the existing ecosystem so you’re not starting from scratch.”

With all that said, .NET 5 promises a unified framework and a stable, predictable release schedule—with a new major version every November. In many ways, this is a realization of a vision Microsoft developed many years ago with a unified, fast, cross-platform framework. And people can run apps without a separate deploy of .NET Core! These are exciting times. Hopefully, the community perception continues to improve as a result.

Community roundup

From Microsoft

Blog posts

Podcasts/videos

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ Dev Discussions - Michael Crump on contributing to the Azure community ]]> https://www.daveabrock.com/2020/07/10/dev-discussions-michael-crump/ 608c3e3df4327a003ba2fe42 Thu, 09 Jul 2020 19:00:00 -0500 This is the full interview from my discussion with Michael Crump in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait two weeks to receive the content.

If you’ve worked on Azure, you’ve likely come across Michael Crump’s work. He started Azure Tips and Tricks, a collection of tips, videos, and talks—if it’s Azure, it’s probably there. He also runs a popular Twitch stream where he talks about various topics.

I caught up with Michael to talk about how he got to working on Azure at Microsoft, his work for the developer community, and his programming advice. (We talk Silverlight, but only in passing—no need to click away.)

Michael Crump

My crack team of researchers tell me that you were a former Microsoft Silverlight MVP. Ah, memories. Do you miss it?

Ah, yes. I was a Microsoft MVP for 4 years, I believe. I spent a lot of time working with Silverlight because, at that time, I was working in the medical field and a lot of our doctors used Macs. Since I was a C# WinForms/WPF developer, I jumped at the chance to start using those skillsets for code that would run on PCs and Macs.

Can you walk me through your path to Microsoft, and what you do at Microsoft now?

I started in Mac tech support because after graduating college, Mac tech support agents were getting paid more than PC agents (supply and demand, I guess!). Then, I was a full-time software developer for about 8 years. I worked in the medical field and created a calculator that determined what amount of vitamins our pre-mature babies should take.

Well, after a while, the stress got to me and I discovered my love for teaching and started a job at Telerik as a developer advocate. Then, the opportunity came at Microsoft for a role to educate and inspire application developers. So my role today consists of developer content in many forms, and helping to set our Tier 1 event strategy for app developers.

What is the coolest thing about Azure for developers that not a lot of folks know about?

There are a ton of free services that you can run forever without paying a dime—a web app, for example. Of course, there are limitations, such as running on a shared instance with no deployment slots or custom domains. But they are still handy for those smaller projects.

Do you have any projects you’ve been working on that you want to show off?

I’d lean on two projects:

Tell us a little about Azure Tips and Tricks. What motivated you to get started, and how can people get involved?

Azure Tips and Tricks was created because I’d find a thing or two about Azure, and forget how to do it again. It was originally designed as something just for me, but many blog aggregators started picking up on the posts and we decided to go big with it—e-books, blog posts, videos, conference talks, and stickers.

The easiest way to contribute is by clicking on the Edit Page button at the bottom of each page. You can also go to http://source.azuredev.tips to learn more.

What made you get into Twitch? What goes on in your channel?

I loved the ability to actually code and have someone watch you and help you code. The interactivity aspect and seeing the same folks come back gets you hooked.

The stream is broken down into three streams a week:

  • Azure Tips and Tricks, every Wednesday at 1 PM PST (Pacific Standard Time, America)
  • Live Interviews with Developers, every Friday at 9 AM PST (Pacific Standard Time, America)
  • Live coding/Security Sunday streams, Sundays at 10:30 AM PST (Pacific Standard Time, America)

What is your one piece of programming advice?

I actually published a list of my top 12 things every developer should know.

My top one would probably be to learn a different programming language (other than your primary language). Simply put, it broadens your perspective and permits a deeper understanding of how a computer and programming languages work.

]]>
<![CDATA[ C# 9 Deep Dive: Top-Level Programs ]]> https://www.daveabrock.com/2020/07/09/c-sharp-9-top-level-programs/ 608c3e3df4327a003ba2fe41 Wed, 08 Jul 2020 19:00:00 -0500 This is the fourth post in a six-post series on C# 9 features in-depth.

Typically, when you learn to write a new C# console application, you are required by law to start with something like this:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
    }
}

Imagine you’re trying to teach someone how a program works. Before you even execute a line of code, you need to talk about:

  • What are classes?
  • What is a function?
  • What is this args string array?

Sure, for you and me it likely won’t be long before those topics need to come up, but the barrier to entry becomes higher—especially when you look at how simple it is to get started with something like Python or JavaScript.


The simple, obligatory “Hello, world” example

With C# 9 top-level programs, you can take away the Main method and condense it to something like this:

using System;

Console.WriteLine("Hello, world!");

And, don’t worry: I know what you’re thinking. Let’s make this a one-liner.

System.Console.WriteLine("Hello, world!");

If you look at what Roslyn generates, from Sharplab, nothing should shock you:

[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
    }
}

No surprise here. It’ll generate a Program class and the traditional main method for you.

Can we be honest? I thought this was where it ended: a nice, clean way to simplify a console app. But! As I read the Welcome to C# 9 announcement a little closer, Mads Torgersen writes:

If you want to return a status code you can do that. If you want to await things you can do that. And if you want to access command line arguments, args is available as a “magic” parameter…Local functions are a form of statement and are also allowed in the top level program.

That is super interesting. Let’s try it out and see what happens in Sharplab.

Return a status code

We can return anything from our top-level program. To return 0 like we did in the good old days, we can do this:

System.Console.WriteLine("Hello, world!");
return 0;

Roslyn gives us this:

[CompilerGenerated]
internal static class $Program
{
    private static int $Main(string[] args)
    {
        Console.WriteLine("Hello, world!");
        return 0;
    }
}

Await things

Mads says we can await things. Let’s await something—let’s call the icanhazdadjoke API, shall we?

Let’s try this code:

using System.Net.Http;
using System;
using System.Net.Http.Headers;

using (var httpClient = new HttpClient())
{
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("text/plain"));
    Console.WriteLine(httpClient.GetStringAsync(new Uri("https://icanhazdadjoke.com")).Result);
}

As you can see, nothing it can’t handle:

[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        HttpClient httpClient = new HttpClient();
        try
        {
            httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("text/plain"));
            Console.WriteLine(httpClient.GetStringAsync(new Uri("https://icanhazdadjoke.com")).Result);
        }
        finally
        {
            if (httpClient != null)
            {
                ((IDisposable)httpClient).Dispose();
            }
        }
    }
}

OK, so I called GetStringAsync but I kinda lied—I haven’t done an await or returned a Task.

If we do this thing:

using System.Threading.Tasks;

await Task.CompletedTask;
return 0;

Watch what happens! We’ve got a TaskAwaiter and an AsyncStateMachine. What, you thought async was easy? Good thing: it’s relatively easy with top-level functions.

[CompilerGenerated]
internal static class $Program
{
    [StructLayout(LayoutKind.Auto)]
    private struct <$Main>d__0 : IAsyncStateMachine
    {
        public int <>1__state;

        public AsyncTaskMethodBuilder<int> <>t__builder;

        private TaskAwaiter <>u__1;

        private void MoveNext()
        {
            int num = <>1__state;
            int result;
            try
            {
                TaskAwaiter awaiter;
                if (num != 0)
                {
                    awaiter = Task.CompletedTask.GetAwaiter();
                    if (!awaiter.IsCompleted)
                    {
                        num = (<>1__state = 0);
                        <>u__1 = awaiter;
                        <>t__builder.AwaitUnsafeOnCompleted(ref awaiter, ref this);
                        return;
                    }
                }
                else
                {
                    awaiter = <>u__1;
                    <>u__1 = default(TaskAwaiter);
                    num = (<>1__state = -1);
                }
                awaiter.GetResult();
                result = 0;
            }
            catch (Exception exception)
            {
                <>1__state = -2;
                <>t__builder.SetException(exception);
                return;
            }
            <>1__state = -2;
            <>t__builder.SetResult(result);
        }

        void IAsyncStateMachine.MoveNext()
        {
            //ILSpy generated this explicit interface implementation from .override directive in MoveNext
            this.MoveNext();
        }

        [DebuggerHidden]
        private void SetStateMachine(IAsyncStateMachine stateMachine)
        {
            <>t__builder.SetStateMachine(stateMachine);
        }

        void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
        {
            //ILSpy generated this explicit interface implementation from .override directive in SetStateMachine
            this.SetStateMachine(stateMachine);
        }
    }

    [AsyncStateMachine(typeof(<$Main>d__0))]
    private static Task<int> $Main(string[] args)
    {
        <$Main>d__0 stateMachine = default(<$Main>d__0);
        stateMachine.<>t__builder = AsyncTaskMethodBuilder<int>.Create();
        stateMachine.<>1__state = -1;
        stateMachine.<>t__builder.Start(ref stateMachine);
        return stateMachine.<>t__builder.Task;
    }

    private static int <Main>(string[] args)
    {
        return $Main(args).GetAwaiter().GetResult();
    }
}

Access command-line arguments

A nice benefit here is that, like with a command line program, you can specify command line arguments. This is typically done by parsing the args[] that you pass into your Main method, but how is this possible with no Main method to speak of?

The args are available as a “magic” parameter, meaning you should be able to access them without passing them in. MAGIC.

Let’s say I wanted something like this:

using System;

var param1 = args[0];
var param2 = args[1];

Console.WriteLine($"Your params are {param1} and {param2}.");

Here’s what Roslyn does:

[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        string text = args[0];
        string text2 = args[1];
        string[] array = new string[5];
        array[0] = "Your params are ";
        array[1] = text;
        array[2] = " and ";
        array[3] = text2;
        array[4] = ".";
        Console.WriteLine(string.Concat(array));
    }
}

Local functions

Now, for my last trick, local functions.

Let’s whip up this code to test out our top-level program.

using System;

DaveIsTesting();

void DaveIsTesting()
{
    void DaveIsTestingAgain()
    {
        Console.WriteLine("Dave is testing again.");
    }
    Console.WriteLine("Dave is testing.");
    DaveIsTestingAgain();
}

I have to admit, I’m pretty excited to see what Roslyn decides to do on this one:

[CompilerGenerated]
internal static class $Program
{
    private static void $Main(string[] args)
    {
        <$Main>g__DaveIsTesting|0_0();
    }

    internal static void <$Main>g__DaveIsTesting|0_0()
    {
        Console.WriteLine("Dave is testing.");
        <$Main>g__DaveIsTestingAgain|0_1();
    }

    internal static void <$Main>g__DaveIsTestingAgain|0_1()
    {
        Console.WriteLine("Dave is testing again.");
    }
}

Or not. They are just split out into different functions in my class. Carry on.

Wrapping up

In this post, we’ve put top-level programs through their paces by seeing how they work with status codes, async calls, command-line arguments, and local functions. I’ve found that this can be a lot more powerful than just slimming down lines of code.

What have you used them for so far? Anything I missed? Let me know in the comments.

]]>
<![CDATA[ C# 9 Deep Dive: Records ]]> https://www.daveabrock.com/2020/07/06/c-sharp-9-deep-dive-records/ 608c3e3df4327a003ba2fe3f Sun, 05 Jul 2020 19:00:00 -0500 Note: Originally published five months before the official release of C# 9, I’ve updated this post after the release to capture the latest updates.

In the previous post of this series, we discussed the init-only features of C# 9, which allowed you to make individual properties immutable. That works great on a case-by-case basis, but the real power in leveraging C# immutability is when you can do this for custom types. This is where records shine, and will be the focus of this post.

This is the second post in a six-post series on C# 9 features in-depth.


A quick primer on immutable types

Before we get started, let’s briefly talk about the concept of immutable types. For sure, immutability is a stuffy word to some but the premise here is simple: once instantiated or initialized, immutable types never change.

What do you get with immutable types? First, you get simplicity. An immutable object only has one state, the state you specified when you created the object. You’ll also see that they are secure and thread-safe with no required synchronization. Because you don’t have threads fighting to change an object, they can be shared freely in your applications.

Put another way: immutable types reduce risk, are safer, and help to prevent a lot of nasty bugs that occur when you update your objects.

Even if you aren’t familiar with this concept yet, or haven’t been forced to think this way, you’re already using it in the .NET world. For example, System.DateTime is immutable, as are strings. And now with records in C# 9, you can create your own immutable types.
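
For instance, here’s that built-in immutability in action:

using System;

var name = "tony";
var shouting = name.ToUpper();    // strings are immutable: ToUpper returns a new string
Console.WriteLine(name);          // tony
Console.WriteLine(shouting);      // TONY

var today = new DateTime(2020, 7, 5);
var tomorrow = today.AddDays(1);  // DateTime is immutable: AddDays returns a new value
Console.WriteLine(tomorrow.Day);  // 6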

OK, so what is a record?

What is a record, exactly? A record is a construct that allows you to encapsulate property state. (I am avoiding the use of the word object, for clarity.) Or, put in less geeky terms, records allow you to perform value-like behaviors on properties. This is why I’m avoiding saying “objects” when I speak of records. We need to start thinking in terms of data, and not objects. Records aren’t meant for mutable state—if you want to represent change, create a new record. That way, you define them by working with the data, and not passing around a single object that gets changed by multiple functions.

Records are incredibly flexible. Anthony Giretti found that classes can have records as properties, and also that records can contain structs and objects.
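
A quick sketch of that flexibility (the types here are my own examples):

using System;

public record Address(string Street, string City);

public class Customer                 // a class with a record property
{
    public Address Home { get; set; }
}

public record Order(Address ShipTo, DateTime PlacedOn); // a record holding a struct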

What’s the difference between structs and records?

Allow me to read your mind—you might be asking: how is this different from structs? If you haven’t used one before, we can define a value type, called a struct, in C# today. So, why don’t we just build on the struct functionality instead of introducing a new member (ha!) to C#? After all, you can declare an immutable value type by saying readonly struct.

Records offer the following advantages, from what I can see:

  • An easy, simplified construct whose intent is to use as an immutable data structure with easy syntax, like with expressions to copy objects (keep reading for details!)
  • Robust equality support with Equals(object), IEquatable<T>, and GetHashCode()
  • Constructor/deconstructor support with simplified positional (constructor-based) records

Of course you can do this with structs, and even classes, but this requires tedious boilerplate. The idea here is to have a construct that is simple and straightforward to implement.

UPDATE: One of the biggest draws of records over structs is the reduced memory allocation that is required. Since C# records are compiled to reference types behind the scenes, they are accessed by a reference and not as a copy. As a result, no additional memory allocation is required other than the original record allocation. Thanks to commenter Tecfield for mentioning this!

Thanks to Isaac Abraham for the correction concerning memory allocation (and confirmed by C# lead designer Mads Torgersen).

Hopefully by now, if I did my job, you know what records are and the rationale for them. Let’s see some code.

Create your first record

To declare a record, you use the new record keyword. Brilliant, right?

public record Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string FavoriteColor { get; set; }
    // and so on...
}

When you mark a type as record like this, it won’t give you immutability on its own—you’ll need to use init properties, like in the following example:

public record Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
    public string Address { get; init; }
    public string City { get; init; }
    public string FavoriteColor { get; init; }
    // and so on...
}

To achieve default immutability, you can create objects by using positional arguments (constructor-like syntax). When you do this, you can declare records with one line:

public record Person(string FirstName, string LastName, string Address, string City, string FavoriteColor);

For more details, check out my post: Are C# 9 records immutable by default?

Use with expressions with records

Before C# 9, you would likely represent new state by creating new values from existing ones.

var person = new Person
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    FavoriteColor = "red"
};

// careful: this copies the reference, not the object—person and
// newPerson now point to the same instance
var newPerson = person;
newPerson.FirstName = "Howard";
newPerson.City = "Pasadena";

This pattern is referred to as non-destructive mutation (now that will make you seem smart at your next dinner party!). C# 9 has a new type of expression, a with expression, to assist you.

This functionality is only available in records, and not structs or classes.

var person = new Person
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    FavoriteColor = "red"
};

var newPerson = person with { FirstName = "Howard", City = "Pasadena" };

You can easily use your familiar object initializer syntax to differentiate what has changed between objects, including multiple properties. Under the covers, the record has a protected copy constructor. If you wish, you can change the default behavior of the copy constructor, but the default should suit most cases: it creates a copy, sets the properties you passed in, and copies the rest unchanged.

Use inheritance with the with expression

Remember when I said records support inheritance, while structs do not? I wasn’t lying. We can use inheritance with our with expression. Let’s say we have a Superhero record that inherits from Person and has a new MaxSpeed property.

class Program
{
    static void Main(string[] args)
    {
        var person = new Superhero
        {
            FirstName = "Tony",
            LastName = "Stark",
            Address = "10880 Malibu Point",
            City = "Malibu",
            FavoriteColor = "red",
            MaxSpeed = 1000
        };

        var newPerson = person with { FirstName = "Howard", City = "Pasadena" };

        Console.WriteLine(newPerson.FirstName); // Howard
        Console.WriteLine(newPerson.MaxSpeed); // 1000
    }
}

public record Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
    public string Address { get; init; }
    public string City { get; init; }
    public string FavoriteColor { get; init; }
}

public record Superhero : Person
{
    public int MaxSpeed { get; init; }
}

Hey, this works! How? Records actually have a hidden virtual method that clones the entire object. An inherited record overrides this method to call the copy constructor of that type, chaining to the copy constructor of the base record.
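
You can’t write that method yourself—its metadata name, <Clone>$, isn’t valid C#—but conceptually the chain looks something like this sketch:

public record Person
{
    public string FirstName { get; init; }

    public Person() { }

    // You can supply the copy constructor yourself; otherwise the compiler does.
    protected Person(Person original) => FirstName = original.FirstName;

    // Conceptually, the compiler also emits:
    //   public virtual Person Clone() => new Person(this);
}

public record Superhero : Person
{
    public int MaxSpeed { get; init; }

    public Superhero() { }

    protected Superhero(Superhero original) : base(original) => MaxSpeed = original.MaxSpeed;

    // Conceptually: public override Person Clone() => new Superhero(this);
    // A with expression calls the clone method, then runs your { ... } setters on the copy.
}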

Implementing positional records

Sometimes, you’ll need to take a positional approach—meaning data is supplied from constructor arguments. You can definitely do this with records.

Here’s what you can do to enable your own constructor and deconstructor.

class Program
{
    static void Main(string[] args)
    {
        var person = new Person("Tony", "Stark", "10880 Malibu Point", "Malibu", "red");
        Console.WriteLine(person.FirstName); // Tony
        Console.WriteLine(person.LastName); // Stark
    }
}


public record Person
{
    public string FirstName;
    public string LastName;
    public string Address;
    public string City;
    public string FavoriteColor;

    public Person(string firstName, string lastName, string address, string city, string favoriteColor)
      => (FirstName, LastName, Address, City, FavoriteColor) = (firstName, lastName, address, city, favoriteColor);
    public void Deconstruct(out string firstName, out string lastName, out string address, 
                            out string city, out string favoriteColor) 
      => (firstName, lastName, address, city, favoriteColor) = (FirstName, LastName, Address, City, FavoriteColor);
}

You can clean this up with new simplified syntax, as that one-liner will give you construction and deconstruction out-of-the-box.

class Program
{
    static void Main(string[] args)
    {
        var person = new Person("Tony", "Stark", "10880 Malibu Point", "Malibu", "red");
        var (first, last, address, city, color) = person;

        Console.WriteLine(person.FirstName); // Tony
        Console.WriteLine(person.LastName); // Stark
        Console.WriteLine(first); // Tony
        Console.WriteLine(last); // Stark
    }
}

public record Person(string FirstName, string LastName, string Address, string City, string FavoriteColor);

Evaluating record equality

If we’re being honest, records are technically a kind of class, which also means they are reference types. But that’s OK—like structs, records override the Equals(object) method that every class has, to achieve the value-ness we are after. This means we can work with value-based equality as well.

For example, if we create two new class instances, we know they have different references in memory, so a ReferenceEquals call (or a default == on a class) will return false even if they have the same values. This is different with structs—because structs are value types, this will not occur.

With records, we’ll compare values.

Watch what happens as we:

  • Create a new Person called person, Tony Stark
  • Create another Person, called newPerson, Howard Stark, with two different properties (FirstName and City)
  • Create a third Person called anotherPerson, and set anotherPerson to the same values as the original person

class Program
{
    static void Main(string[] args)
    {
        var person = new Person
        {
            FirstName = "Tony",
            LastName = "Stark",
            Address = "10880 Malibu Point",
            City = "Malibu",
            FavoriteColor = "red"
        };

        var newPerson = person with { FirstName = "Howard", City = "Pasadena" };

        Console.WriteLine(Object.ReferenceEquals(person, newPerson)); // false
        Console.WriteLine(Object.Equals(person, newPerson)); // false

        var anotherPerson = newPerson with { FirstName = "Tony", City = "Malibu" };
        Console.WriteLine(Object.Equals(person, anotherPerson)); // true
    }
}

public record Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
    public string Address { get; init; }
    public string City { get; init; }
    public string FavoriteColor { get; init; }
}

Nice! Along with the Equals override, records also ship with a matching GetHashCode() override, so two records that are equal produce equal hash codes - handy if you want to use them as dictionary keys.
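
A quick sketch with the same Person record:

var a = new Person { FirstName = "Tony", LastName = "Stark" };
var b = new Person { FirstName = "Tony", LastName = "Stark" };

Console.WriteLine(a == b); // true: value-based equality
Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // true: equal values, equal hashes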

Wrapping up

In this post, we discussed a lot about records in C# 9. We discussed what a record is, how it compares to a struct, with expressions, inheritance, positional records, and how to evaluate equality.

As I discussed, this is continually changing—let me know how this works for you, and if things have changed since this was published. These are exciting new features and I hope you are able to see their benefit.

]]>
<![CDATA[ C# 9 Deep Dive: Pattern Matching ]]> https://www.daveabrock.com/2020/07/06/c-sharp-9-pattern-matching/ 608c3e3df4327a003ba2fe40 Sun, 05 Jul 2020 19:00:00 -0500 In the previous post of this series, we discussed the power of records. That was a heavy topic.

For something completely different, we’ll discuss improved pattern matching in C# 9. This is not a completely new feature, but something that has evolved since it was first released way back in C# 7, albeit in basic form. This Microsoft article runs down the basics of pattern matching, which improved greatly in C# 8 as well. The pattern matching works with the is operator and with switch expressions, much of which I showed off in my article C# 8, A Year Late.

This is the third post in a six-post series on C# 9 features in-depth.

First, get to know the C# 8 switch expression syntax

Before we get started with pattern matching enhancements in C# 9, note that much of it builds on the improved switch syntax from C# 8. (If you are already familiar, you can scroll to the next section.)

To be clear, they are now called switch expressions, and not switch statements. Before C# 8, you would typically have this (stolen from my C# 8 article):

public static string FindAProgrammingLanguage(string languageInput)
{
    string languagePhrase;

    switch (languageInput)
    {
        case "C#":
            languagePhrase = "C# is fun!";
            break;
        case "JavaScript":
            languagePhrase = "JavaScript is mostly fun!";
            break;
        default:
            throw new Exception("You code in something else I don't recognize.");
    }
    return languagePhrase;
}

With switch expressions, we can replace case and : with => and replace the default statement with _. That “underscore operator” is technically called a discard—a temporary, dummy variable that you want intentionally unused. This gives us a much cleaner, expression-like syntax.

Be honest: switch statements enable goto-like control flow (so we are clear on how I feel about this) and just execute code. I find the expressive style, which forces you to return a value, much better. You know that empty “well, better than a million if’s, I guess?” feeling you get with switch statements? This should make you feel better.

public static string FindAProgrammingLanguage(string languageInput)
{
    string languagePhrase = languageInput switch
    {
        "C#" => "C# is fun!",
        "JavaScript" => "JavaScript is mostly fun!",
        _ => throw new Exception("You code in something else I don't recognize."),
    };
    return languagePhrase;
}

Now that we see how this improved C# 8 switch behavior helps you, let’s move on to pattern matching.

How pattern matching helps you

Pattern matching allows you to simplify scenarios where you need to cohesively manage data from different sources. An obvious example is when you call an external API where you don’t have any control over the types of data you are getting back. Of course, typically you would create types in your application for all the different shapes of data you could get back from this API. Then, you would build an object model off those types. This is a lot of work. What’s that old object-oriented programming joke about the gorilla and the banana?

Imagine if you are working with multiple APIs! What if you provide shipping services, and are working with all the necessary APIs (FedEx, USPS, and more). You think they all got together to form one shared data model?

To make our lives easier, let’s sprinkle some functional C# 9 magic on top of our OO language.

(In-depth pattern matching techniques are beyond the scope of this post, but do check out Bill Wagner’s excellent work.)

Our C# 8 baseline example

To build off our previous posts, let’s stick with the Iron Man theme. Here’s some C# 8 code we use to calculate a superhero’s fuel cost based on a maximum speed.

class Program
{
    static void Main(string[] args)
    {
        var superhero = new Superhero
        {
            FirstName = "Tony",
            LastName = "Stark",
            MaxSpeed = 10000
        };

        static decimal GetFuelCost(object hero) => hero switch
        {
            Superhero s when s.MaxSpeed < 1000 => 10.00m,
            Superhero s when s.MaxSpeed <= 10000 => 7.00m,
            Superhero _ => 12.00m,
            _ => throw new ArgumentException("I do not know this one", nameof(hero))
        };
        Console.WriteLine(GetFuelCost(superhero)); // 7.00
    }
}

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string FavoriteColor { get; set; }
}

public class Superhero : Person
{
    public int MaxSpeed { get; set; }
}

Relational patterns

With C# 9, we can simplify our switch expression using relational patterns. This allows us to use the relational operators such as <, <=, >, and >=. We can simplify our program—take a look at our new GetFuelCost method:

static decimal GetFuelCost(Superhero hero) => hero.MaxSpeed switch
{
    < 1000 => 10.00m,
    <= 10000 => 7.00m,
    _ => 12.00m
};

Logical patterns

Similarly, you can use logical operators, like and, or, and not, as a complement to using relational patterns. This might be a more readable option for you if relational operators are not your jam.

Let’s try a slightly modified example with words instead of symbols:

static decimal GetFuelCost(Superhero hero) => hero.MaxSpeed switch
{
    1 or 2 => 1.00m,
    > 2 and < 1000 => 10.00m,
    <= 10000 => 7.00m,
    _ => 12.00m
};

You can also use the not operator, as I’ve highlighted in previous posts on C# 9 improvements.

As described in the Welcome to C# 9 post by Microsoft, it’s convenient if you use the null constant pattern:

not null => throw new ArgumentException($"Not a known person: {hero}", nameof(hero)),
null => throw new ArgumentNullException(nameof(hero))
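
Folding those null checks into the earlier fuel-cost example might look something like this - a sketch, reusing the Superhero type from the baseline sample:

static decimal GetFuelCost(Superhero hero) => hero switch
{
    null => throw new ArgumentNullException(nameof(hero)),
    { MaxSpeed: < 1000 } => 10.00m,
    { MaxSpeed: <= 10000 } => 7.00m,
    _ => 12.00m
};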

It also helps you think more clearly about negation logic. If you are used to something like this:

if (!(hero is Person)) { ... }

Your co-workers will thank you if you change it to this:

if (hero is not Person) { ... }

Wrapping up

In this post, we discussed the advantages of pattern matching, especially when coupled with the powerful switch expressions introduced in C# 8. We then discussed how C# 9 can help clean up your syntax with its relational and logical patterns.

Stay tuned for the next post, which discusses target typing and covariant returns in C# 9.

]]>
<![CDATA[ The .NET Stacks #6: Blazor mobile bindings, EF update, ASP.NET Core A-Z, more! ]]> https://www.daveabrock.com/2020/07/05/dotnet-stacks-6/ 608c3e3df4327a003ba2fe3e Sat, 04 Jul 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

On tap this week:

  • Mobile Blazor bindings
  • More Entity Framework updates (get your questions in)!
  • Dev Discussions: Shahed Chowdhuri
  • Community roundup

Mobile Blazor bindings

In this week’s ASP.NET community standup, Jon Galloway talked with Eilon Lipton about his experimental Mobile Blazor Bindings project. The project has been out in experimental mode for a while and, as Eilon said in the standup, looks to be slated for general public consumption by the end of July or so.

So, what is Mobile Blazor Bindings? (We’ll call it MBB from now on.) MBB allows you to write native, cross-platform mobile applications using the Blazor programming model. (If you don’t know Blazor yet, it’s a new ASP.NET Core technology which allows you to write C# across the stack, in most cases replacing JavaScript.) This allows you to use Blazor with Razor markup, as an alternative to the Xamarin.Forms XAML model, which might be foreign to many web developers. In February, Dylan Berry wrote about how he was able to port over a Xamarin.Forms MVVM screen, which was 188 lines - and was only 68 lines with the MBB model.

Given the success of this project so far, a natural question to ask is what this says for the fate of Xamarin. Based on Microsoft’s past communications on MBB, this is messaged as an option. In other words: you do you. If you like to write mobile apps using XAML, continue doing that! If you’re a fan of Razor syntax and features (and the Blazor model), this is an option for you. As someone who enjoys the latter and is allergic to XAML (despite what my doctor says), it’s a big selling point for me, as it lowers the barrier to entry considerably if you want to dip your toes into mobile app development.

For more details, check out the GitHub repo.

More Entity Framework updates (get your questions in!)

A few weeks ago, we checked in on EF Core. As things evolve quickly, and with this week’s release of EF Core 5 Preview 6, there’s still more to discuss! This release includes a lot of requested functionality, including split queries for related collections, an IndexAttribute annotation, improved query translation exceptions, IPAddress mapping, and more.

Split queries are a big one. Until this release, EF Core generated a single SQL query for any LINQ query, even when you used .Include or a projection type that returned multiple related collections. While that ensures consistency, it can perform poorly.

Now, you can leverage the AsSplitQuery() API. Let’s say you have a Blogger entity, which has Posts and then Tags. Now, this would be split into three different queries: one to get the blogger (by ID, likely), one to get all Posts for the blogger using an inner join on Blogger, and a third to get Tags, with joins on the Blogger and the Post.
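
A minimal sketch of what that looks like in code - the context and entity names here are hypothetical:

using var context = new BloggingContext();

var bloggers = context.Bloggers
    .Include(b => b.Posts)
        .ThenInclude(p => p.Tags)
    .AsSplitQuery() // one SQL query per related collection, instead of a single join
    .ToList();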

This new feature supports all operations, including OrderBy, Skip, Take, Join, and FirstOrDefault. Take a look at the release announcement for details.

Also, I’m excited to say I have a .NET Stacks interview set up with Jeremy Likness, the Senior PM for .NET Data at Microsoft! If you have any questions about Entity Framework, or any data-related .NET stuff, let me know this week and I will forward it to him.

Dev discussions: Shahed Chowdhuri

As a huge fan of the Marvel universe, Microsoft’s Shahed Chowdhuri must feel like Captain America after publishing the last post of his ASP.NET Core A-Z blog series this week. In these posts, he wrote 26 ASP.NET Core posts in 26 weeks, from A-Z (Authentication & Authorization to Zero-Downtime Web Apps). This was all using a single sample project!

I talked with Shahed about his path to Microsoft, the evolution of the blog series, his side projects and interests and, of course, which Marvel character sums him up best. You can reach out to Shahed on Twitter and follow his site, Wake Up and Code.

Shahed Chowdhuri

What got you into software development, and Microsoft? What are you doing for Microsoft these days?

Although I took two and a half years to complete a four-year degree in Civil and Environmental Engineering, I also taught myself how to build websites and software applications while in college. I worked on my personal website and also additional websites for the Civil Engineering department, the International Students Association, and my dorm.

I realized that I could build something, see results instantly, fix issues, and then see my changes right away. I got an internship as a web application developer during my second (and final) summer vacation, and then got a full-time job offer upon graduation.

I joined Microsoft as a Tech Evangelist 6+ years ago, as I was already using my spare time for public speaking on various topics. My role has changed in recent years, and I’m currently working with our enterprise customers to collaborate on software development projects and solve real-world business problems.

What made you take on the original ASP.NET Core A-Z blog series in 2019?

In the fall of 2018, I was helping a colleague with some .NET code for a customer project (file upload into Azure Blob Storage) during a company hackathon. Instead of emailing him the code sample with an explanation, I decided to write a blog post and publish the code in a GitHub repository. To lead into this blog post, I also worked on a “Hello World” article that was appropriately titled “Hello, ASP.NET Core!” so that I could help new developers get started with ASP.NET Core.

As a result, this was the birth of a mini series to end 2018 with a bang. Over a span of 12 weeks from October to December 2018, I published these seemingly randomly-selected topics on ASP.NET Core. Eventually, I revealed the hidden message “HAPPY NEW YEAR” just before the new year. Knowing that I couldn’t pull off the same trick twice, I decided to go with a new series in 2019.

There are 52 weeks in a year, 26 weeks in half a year. This was the perfect opportunity to cover 26 different topics from A-Z in 2019, since there are also 26 letters in the alphabet.

As you kicked off a new version of the series with an approach to feature everything in a single project, did you come across anything unexpected or interesting?

Before 2019, I wasn’t sure if I could pick a topic for each letter if I stuck to ASP.NET Core. I was debating whether I should cover HoloLens, Bot Framework, Machine Learning and various other topics throughout the series. But I narrowed it down to topics that would be relevant to an ASP.NET Core developer, and also reviewed my list with Jon Galloway.

The 2019 series included code snippets that were all over the place. For the 2020 series, I decided to start with a real-world open-source web app (NetLearner) and build upon it week after week. I made sure that the web app included similar functionality across multiple web projects (MVC, Razor Pages, Blazor) with a shared library for core/infrastructure code.

What is your one piece of programming advice?

One piece of advice I would give anyone is to always keep learning. That doesn’t necessarily mean that you should learn every new programming language or framework that pops up on your radar. It could mean that you may want to dig deeper into a language you’ve already been working with. It could be some business skills that you want to pick up, to run your own software business or work as a consultant to help others run theirs.

This is only an excerpt of my talk with Shahed. Read the full interview over at my website, especially if you like Baby Groot from the Guardians of the Galaxy movies.

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ C# 9 Deep Dive: Init-only features ]]> https://www.daveabrock.com/2020/06/29/c-sharp-9-deep-dive-inits/ 608c3e3df4327a003ba2fe3d Sun, 28 Jun 2020 19:00:00 -0500 Note: Originally published five months before the official release of C# 9, I’ve updated this post after the release to capture the latest updates.

A few weeks ago, we took a quick tour of some upcoming C# 9 features that will make your development life easier. We dipped our toes in the water. But now it’s time to dig a little deeper.

I’m starting a new series over the next several weeks that will showcase all of the announced features incrementally. Then, we will tie it all together with an all-in-one app. As for the features we are showing off, we could always dig deeper into what we see in the Language Feature Status in GitHub, but the features publicly announced at Build will most likely make it when .NET 5.0 launches in November 2020.

A focus on immutability

A big focus of C# 9 is enabling features that make immutability easy. The rise of functional programming has shown us how error-prone mutating objects can be. While we could achieve immutability in previous versions of C#, it was a little hacky.

With C# 9, the team is shipping a bunch of features that help with immutability.

Doing immutability before C# 9

So how would I do immutability before C# 9? For virtually all of your C# life, you’ve done something like the following:

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string FavoriteColor { get; set; }
    // and so on...
}

This gets and sets properties of my Person with no restrictions. To achieve immutability I’d modify my class to only include a get accessor:

public class Person
{
    public string FirstName { get; }
    public string LastName { get; }
    public string Address { get; }
    public string City { get; }
    public string FavoriteColor { get; }
    // and so on...
}

With that in place, I would use constructors to enforce this behavior:

public class Person
{
    public Person(string firstName, string lastName, string address, string city, string favoriteColor)
    {
        FirstName = firstName;
        LastName = lastName;
        Address = address;
        City = city;
        FavoriteColor = favoriteColor;
    }

    public string FirstName { get; }
    public string LastName { get; }
    public string Address { get; }
    public string City { get; }
    public string FavoriteColor { get; }
}

That’s great, but I can’t do this with object initializers. If I wanted to initialize an object like this (using the original, mutable get/set version of Person)…

var person = new Person
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    FavoriteColor = "Red"
};

…there’s nothing that prevents me from mutating after the fact:

Console.WriteLine(person.FirstName); // Tony
person.FirstName = "Howard";
Console.WriteLine(person.FirstName); // Howard

Previously, for object initialization to work, the properties had to be mutable: the initializer syntax calls the object’s constructor (in this case, as in most cases, a parameterless one) and then performs the assignments through the property setters.

Introducing the init accessor

With C# 9, we can change this with an init accessor. This means you can only create and set a property when you initialize the object. If we modify our Person model like this, we can prevent the FirstName from being changed:

public class Person
{
    public string FirstName { get; init; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string FavoriteColor { get; set; }
    // and so on...
}

With this, that means this code will work:

var person = new Person
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    FavoriteColor = "Red"
};

However, when you try to execute this code:

person.FirstName = "Howard";

The compiler will not be happy.
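
Specifically, you’ll see a compile-time error that reads something like this (CS8852, if memory serves):

// error CS8852: Init-only property or indexer 'Person.FirstName' can only be
// assigned in an object initializer, or on 'this' or 'base' in an instance
// constructor or an 'init' accessor.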

Warning: init-only properties aren’t mandatory

The beauty of object initializers is the ability to set whatever you want, wherever you want, like so:

var person = new Person
{
    FirstName = "Tony",
    City = "Malibu",
};

If you want to come in afterward and set FavoriteColor (assuming it, too, had an init accessor on it), thinking you can do so because you haven’t set its value yet, you’re wrong. For example, I can’t come in and do this…

person.FavoriteColor = "Red";

…because the object has already been initialized.

Init accessors and read-only fields

As we just saw, init accessors can only be called when you initialize the object. They pair nicely with readonly fields: an init accessor is allowed to write to a readonly field, because that write can only ever happen during initialization.

With this in mind, we can back our properties with private readonly fields and validate during initialization - if a caller explicitly passes a null value, an ArgumentNullException is thrown:

public class Person
{
    private readonly string _firstName;
    private readonly string _lastName;
    private readonly string _address;
    private readonly string _city;
    private readonly string _favoriteColor;

    public string FirstName
    {
        get => _firstName;
        init => _firstName = (value ?? throw new ArgumentNullException(nameof(FirstName)));
    }
    public string LastName
    {
        get => _lastName;
        init => _lastName = (value ?? throw new ArgumentNullException(nameof(LastName)));
    }
    public string Address
    {
        get => _address;
        init => _address = (value ?? throw new ArgumentNullException(nameof(Address)));
    }
    public string City
    {
        get => _city;
        init => _city = (value ?? throw new ArgumentNullException(nameof(City)));
    }
    public string FavoriteColor
    {
        get => _favoriteColor;
        init => _favoriteColor = (value ?? throw new ArgumentNullException(nameof(FavoriteColor)));
    }
}
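
A quick sketch of how those guard clauses behave:

var person = new Person
{
    FirstName = "Tony",
    LastName = "Stark",
    Address = "10880 Malibu Point",
    City = "Malibu",
    FavoriteColor = "red"
}; // fine

var badPerson = new Person { FirstName = null }; // throws ArgumentNullException

One caveat: an init accessor only runs for properties you actually assign in the initializer, so a property you omit entirely skips its null check.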

What’s next

In this post, we learned how to make individual properties become immutable. If you want this behavior for your entire object, you’ll want to work with records - one of the best new features in C# 9, in my opinion. Stay tuned for my next post to discuss this.

]]>
<![CDATA[ Dev Discussions - Shahed Chowdhuri talks about his ASP.NET Core A-Z blog series ]]> https://www.daveabrock.com/2020/06/28/dev-discussions-shahed-chowdhuri/ 608c3e3df4327a003ba2fe3c Sat, 27 Jun 2020 19:00:00 -0500 This is the full interview from my discussion with Shahed Chowdhuri in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away!

As a huge fan of the Marvel universe, Microsoft’s Shahed Chowdhuri must feel like Captain America after publishing the last post of his ASP.NET Core A-Z blog series this week. In these posts, he wrote 26 ASP.NET Core posts in 26 weeks, from A-Z (Authentication & Authorization to Zero-Downtime Web Apps). This was all using a single sample project!

I talked with Shahed about his path to Microsoft, the evolution of the blog series, his side projects and interests and, of course, which Marvel character sums him up best.

You can reach out to Shahed on Twitter and follow his site, Wake Up and Code.

Shahed Chowdhuri

What got you into software development, and Microsoft? What are you doing for Microsoft these days?

Although I took two and a half years to complete a four-year degree in Civil and Environmental Engineering, I also taught myself how to build websites and software applications while in college. I worked on my personal website and also additional websites for the Civil Engineering department, the International Students Association, and my dorm.

I realized that I could build something, see results instantly, fix issues, and then see my changes right away. I got an internship as a web application developer during my second (and final) summer vacation, and then got a full-time job offer upon graduation.

I joined Microsoft as a Tech Evangelist 6+ years ago, as I was already using my spare time for public speaking on various topics. My role has changed in recent years, and I’m currently working with our enterprise customers to collaborate on software development projects and solve real-world business problems.

What made you take on the original ASP.NET Core A-Z blog series in 2019?

During my evangelism days, I would mostly use my site to publish my PowerPoint slides after a meetup or conference. But I wouldn’t get a lot of traffic on the site, because I didn’t have many recent articles. As my role changed within Microsoft, I really wanted to give something back to the community again, in the form of technical writing and code samples.

In the fall of 2018, I was helping a colleague with some .NET code for a customer project (file upload into Azure Blob Storage) during a company hackathon. Instead of emailing him the code sample with an explanation, I decided to write a blog post and publish the code in a GitHub repository. To lead into this blog post, I also worked on a “Hello World” article that was appropriately titled “Hello, ASP.NET Core!” so that I could help new developers get started with ASP.NET Core.

As a result, this was the birth of a mini series to end 2018 with a bang. As you can see in the list of article titles below, the first letter of each post spells out Happy New Year.

  • Hello, ASP .NET Core!
  • Azure Blob Storage from ASP .NET Core File Upload
  • Pages in ASP .NET Core: Razor, Blazor and MVC Views
  • Protocols in ASP .NET Core: HTTPS and HTTP/2
  • Your Web App Secrets in ASP .NET Core
  • NetLearner – ASP .NET Core Internet Learning Helper
  • EF Core Migrations in ASP .NET Core
  • Watching for File Changes in ASP .NET Core
  • Your First Razor UI Library with ASP .NET Core
  • Exploring .NET Core 3.0 and the Future of C# with ASP .NET Core
  • API Controllers in ASP .NET Core
  • Real-time ASP .NET Core Web Apps with SignalR

Over a span of 12 weeks from October to December 2018, I published these seemingly randomly-selected topics on ASP.NET Core. Eventually, I revealed the hidden message “HAPPY NEW YEAR” just before the new year. Knowing that I couldn’t pull off the same trick twice, I decided to go with a new series in 2019.

There are 52 weeks in a year, 26 weeks in half a year. This was the perfect opportunity to cover 26 different topics from A-Z in 2019, since there are also 26 letters in the alphabet.

As you kicked off a new version of the series with an approach to feature everything in a single project, did you come across anything unexpected or interesting?

Before 2019, I wasn’t sure if I could pick a topic for each letter if I stuck to ASP.NET Core. I was debating whether I should cover HoloLens, Bot Framework, Machine Learning and various other topics throughout the series. But I narrowed it down to topics that would be relevant to an ASP.NET Core developer, and also reviewed my list with Jon Galloway. (Jon is a Sr. PM on the Visual Studio Mac team who was formerly on the .NET Team, and still continues to run the .NET Community Standup livestream broadcast.)

The 2019 series included code snippets that were all over the place. For the 2020 series, I decided to start with a real-world open-source web app (NetLearner) and build upon it week after week. I made sure that the web app included similar functionality across multiple web projects (MVC, Razor Pages, Blazor) with a shared library for core/infrastructure code. I realized that I needed to provide some context for the NetLearner repository, so I also published a “2020 Prelude” series in late 2019. This mini-series provided some information on how to share code across multiple web apps, how to get started with the latest stable version of ASP .NET Core (v3.1) and what NetLearner is all about.

Other than ASP.NET Core, what other Microsoft technologies excite you/do you like to tinker with?

There are so many! Before joining Microsoft, I had used Visual Studio and C# paired with XNA to build video games for Windows and XBox 360 in my spare time.

I also used my ASP.NET skills to publish a free Sales Data Analyzer tool to help other indie developers better understand their worldwide sales data.

Over the years, I’ve had a chance to learn and tinker with ASP.NET Core, Bot Framework, Cognitive Services, Desktop apps, Entity Framework, Functions (serverless), game development with Unity, HoloLens mixed reality, IoT, JavaScript, Kinect, Logic Apps, Machine Learning…oh wait, this is turning into another A-Z series!

Do you have any other specific projects you’ve been working on that you want to show off?

Yes! I’ve been working on some early concepts for a cinematic universe visualizer. The goal is to connect the dots between all the various movies, shows, cast and crew that are a part of each cinematic universe, starting with Marvel’s MCU.

What kind of Marvel character are you?

I am Groot.

(Ed. Note: I am hereby required to show off Baby Groot.)

What is your one piece of programming advice?

One piece of advice I would give anyone is to always keep learning. That doesn’t necessarily mean that you should learn every new programming language or framework that pops up on your radar. It could mean that you may want to dig deeper into a language you’ve already been working with. It could be some business skills that you want to pick up, to run your own software business or work as a consultant to help others run theirs.

]]>
<![CDATA[ On simplifying null validation with C# 9 ]]> https://www.daveabrock.com/2020/06/24/simplified-null-validation/ 608c3e3df4327a003ba2fe3b Tue, 23 Jun 2020 19:00:00 -0500 UPDATE: Since the initial publishing of the post, the approach has changed. This post has been updated to reflect the latest news.

In my last post, I took a test drive through some C# 9 features that might make your developer life easier. In it, I mentioned using logical patterns, such as the not keyword to throw an ArgumentException, if wanted:

not null => throw new ArgumentException($"Not sure what this is: {yourArgument}", nameof(yourArgument))

Championed proposal: simplified null-parameter checking

As it turns out, there is a new championed proposal, called simplified null-parameter checking, which appears to be gaining some traction. Some folks have written about this, making it seem like it’s a sure thing in C# 9, so I’d like to clarify some things I learned after doing some research (and, of course, hit up the comments if I’m incorrect, as things are changing frequently).

As for the proposal itself: let’s say you do what you’ve done quite a few times in regular C# code:

public void DoSomethingCool(string coolString)
{
    if (coolString is null)
    {
        throw new ArgumentNullException(nameof(coolString), $"Ooh, can't do anything with {coolString}");
    }

    // proceed to do some cool things
}

Initial approach: add ! to your parameter name

In this C# 9 proposal, you can add ! to your parameter name to simplify things. Try this one instead:

public void DoSomethingCool(string coolString!)
{
    // proceed to do some cool things
}

I have mixed feelings about this proposal.

Not only are you super excited about your parameter, you’re also asking the C# compiler to trigger standard null checks for it. It is important to mention this is for runtime checks only and does not impact the type system. Therefore, the check is on the value and not the type. I love the clarity.

But, there’s a lot here:

  • Adding ! in a C-based language might confuse developers who have a mental model of the negation operator
  • It’s very single-use and not very extensible
  • There seems to be other approaches that make more sense like a [NullCheck] attribute, as was suggested, or using asserts, or even a project file directive

New approach: use !! instead

Late last week, around June 25-ish of 2020, the C# team decided to change to !! instead (meeting notes here).

The general rules are:

  • If this is used on a parameter, throw ArgumentNullException(nameof(paramName))
  • If used elsewhere, throw InvalidOperationException. A NullOperationException might be clearer, and others have noticed this as well

This behavior could also be used as an operator. If you review the GitHub issue on the topic, Jared Parsons notes:

The decision was limited to using !! as parameter null checking. While LDM recognizes that !! could be useful as an operator, and in previous meetings we had sketched out how that would work, in this meeting we only decided on the parameter form.

I do think this is better, but not that much better. The confusion surrounding how ! is used has been eliminated, but it still isn’t an extensible solution. Adding a keyword might help here, but I personally would like to see more flexibility with more than just parameters, such as a { get; set!; } construct.

Anyway, if you look at the Language Feature Status and even the spirited issue itself, it is very much in progress. If you’re looking to simplify null checking in C# 9, for now, I would depend on using logical patterns until this is ironed out some more.

]]>
<![CDATA[ The .NET Stacks #5: gRPC-Web, play with C# 9, .NET Foundation, community roundup! ]]> https://www.daveabrock.com/2020/06/20/dotnet-stacks-5/ 608c3e3df4327a003ba2fe3a Fri, 19 Jun 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

On tap this week:

  • gRPC-Web for .NET
  • Try out C# 9 in LinqPad
  • Checking in on the .NET Foundation
  • Community roundup

gRPC-Web for .NET officially released this week

Microsoft announced this week that gRPC-Web for .NET is officially available, after offering it in preview in January. Before discussing what problem it solves, let’s step back with a quick primer on gRPC for the uninitiated.

gRPC is a high-performance Remote Procedure Call (RPC) framework that is language-agnostic to boot (see the gRPC site). If this concept is new to you, you are probably wondering one of two things (or both): (1) how does this compare to HTTP APIs, and (2) how is this different than WCF, which I have previously associated with RPC calls?

How is gRPC different than HTTP APIs?

Your typical workflow of retrieving data over the wire likely involves HTTP APIs, such as REST, with either JSON or XML. Here’s a rundown of some key differences:

  • REST APIs offer optional schema/contract support using OpenAPI/Swagger, while it is required in gRPC using a .proto definition. This file defines the contract of your gRPC services and messages. You’ve likely noticed that REST is open to a lot of … interpretation, and a contract model is a huge benefit.
  • While existing API models use regular HTTP, gRPC is designed for HTTP/2 and the performance benefits it provides, such as compression and multiplexing. While HTTP APIs can use HTTP/2, it is not designed for it by default like gRPC is.
  • gRPC offers bi-directional streaming, while HTTP APIs only offer client or server streaming.
  • gRPC offers first-class code generation support, by sharing the .proto file between client and server implementations, while HTTP APIs require additional tooling and OpenAPI support.

How is gRPC different than WCF?

So, to answer the second question, how is this different than your experience with WCF? The main use case for RPC is tight coordination between the client and the server: you write code as if both sides ran on a single platform, without worrying about the networking in between.

With these benefits, you can see gRPC coming in handy for microservices in need of efficiency gains and/or point-to-point real-time requirements. Where gRPC shines over WCF - in addition to what I outlined above - is not requiring the dated SOAP protocol, no .NET language dependency, and the Protobuf serialization model (an extremely efficient binary message format).

gRPC-Web for .NET: resolving browser support issue

Of course, gRPC doesn’t come without weaknesses. Human readability is an issue, as gRPC messages are binary-encoded by default. But the real limitation is browser support - there is barely any. The HTTP/2 support is awesome, but no browser today gives gRPC clients the fine-grained control over requests that the protocol requires. Browsers do not mandate HTTP/2 and aren’t mature enough to support gRPC directly yet. This opens up a need for tooling that provides some gRPC browser support.

This is the need that gRPC-Web for .NET fills. gRPC-Web provides a JavaScript client that all modern browsers support, and a server proxy. The client calls the proxy, and the proxy forwards on the gRPC requests - saving you from hacking together some nginx magic.
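
On the ASP.NET Core side, enabling the proxy support is just middleware. A sketch, assuming the Grpc.AspNetCore.Web package and the GreeterService from the default gRPC template:

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    // must sit between UseRouting and UseEndpoints
    app.UseGrpcWeb();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<GreeterService>().EnableGrpcWeb();
    });
}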

Your browser needs are likely high these days, with SPAs and Blazor, so this provides a lot of benefits. Try it out by checking out the announcement and going through the documentation, and a sample app.

Play with C# 9 today

I played around with C# 9 a little bit this week. C# 9 is slated for release with .NET 5 in November 2020. Lucky for the community, it’s a lot easier to play with the preview bits. You can now use the LINQPad tool! Just enable a checkbox in your settings, and you’re good to go. You don’t even need to install anything. Give it a shot and let me know what you think.

My main takeaway is: things are getting a lot more functional and immutable.

Checking in with the .NET Foundation

I don’t think I’m going out on a limb by saying Microsoft is going through a rough patch with the OSS community right now. The progress in Microsoft embracing OSS in the last decade, and their reputation, took a hard hit with the issues surrounding AppGet - even the biggest Microsoft fans would probably admit the communication and openness was severely lacking, and a weekend blog post skirting around things didn’t seem to help. It’s triggered several community responses like this. For those of us in the community who have backed Microsoft’s efforts, it’s a little sad.

This seems to have placed renewed focus on the .NET Foundation, showcased as an independent, non-profit organization established to support an innovative, commercially friendly, open-source ecosystem around the .NET platform. A big benefit here is community outreach, and, as Microsoft is a corporation answerable to shareholders, it obviously has some interest there. With board elections coming up, it’s definitely high time to make an impact (with the caveats/analysis written here).

While community folks are encouraged to participate, I’m also hoping this is Microsoft’s opportunity to become more open, honest, and communicative with the developer community.


New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ Reduce mental energy with C# 9 ]]> https://www.daveabrock.com/2020/06/18/reduce-mental-energy-with-c-sharp/ 608c3e3df4327a003ba2fe39 Wed, 17 Jun 2020 19:00:00 -0500 Note: Originally published five months before the official release of C# 9, I’ve updated this post after the release to capture the latest updates.

This is a humbling yet completely accurate fact: you spend much more time reading code than writing it. Any experienced programmer will tell you the reading-to-writing ratio is easily 5-to-1 or even 10-to-1. You’re understanding how things work. You’re hunting for bugs. You’re scrolling past code with thoughts like, “Nope, doesn’t apply … doesn’t matter, doesn’t matter …” until you have to pause and think, and spend a silly amount of time trying to understand how something works.

It could be a developer trying to be clever, or an unfortunate function with an arrow-shaped pattern … you know, a variety of things. Whatever the case, it interrupts your flow. When you think how much time you spend reviewing code, it adds up and can turn into a big annoyance.

For example, let’s say you’re trying to figure out a bug and you come across this C# 8 code.

if (!(dave is Developer))  

This is getting a little ridiculous. Nobody has time for this negation logic and double parentheses. Best case, it interrupts your flow and mental model. Worst case, you scan it and misunderstand it. I might sound crazy, I get it - this may have only taken an extra few seconds. But for a large application, hundreds of times a day? You see what I mean? Why couldn’t I do something like this?

if (dave is not Developer)

See? I completely understand this: I can keep scrolling or stop and know I’ve found my bug. If only I could do this, you think.

If you aren’t aware, you can. This syntax, and other improvements, are available in C# 9, which shipped with .NET 5 in November 2020. C# 9 has a lot, but this post is going to focus on improvements that help restore the valuable mental energy required in a mentally exhausting profession. And before you ask: no, C# 9 isn’t full of FDA-approved health benefits, but I’ve found some great stuff that helps make code cleaner, more maintainable, and easier to understand, and prevents a lot of “wait, what?” moments.

Let’s take a look at what’s coming. This is just scratching the surface, and I’ll write about more features in-depth as I come across them. I think you’ll find the more you dive into C# 9, the more you appreciate its adoption of the functional programming, “no side effects” model.

Records

One of the biggest features coming out of C# 9 is the concept of records. Records allow an entire object to be immutable, meaning you can do value-like things on them. Think data, not objects.

Let’s take a Developer record:

public record Developer
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
    public string PreferredLanguage { get; init; }
}

Wait, what is init doing there? That is an init-only property, also new to C# 9. Before this, your properties needed to be mutable for them to be initialized. With init accessors, it’s like set except it can only be called during object initialization.

Anyway, our record now gives us access to some other cool stuff that makes for some clean code.

Data member simplification

If we initialize our objects using constructors like this:

var dev = new Developer("Dave", "Brock", "C#");

…we can declare a record this way instead:

public record Developer(string FirstName, string LastName, string PreferredLanguage);
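
That one line buys you the constructor and a Deconstruct method. A quick sketch:

var dev = new Developer("Dave", "Brock", "C#");
var (first, last, language) = dev; // compiler-generated deconstruction
Console.WriteLine(first); // Dave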

With-expressions

Much of your data is immutable, so if you wanted to create a new object with much, but not all, of the same data (your real use cases would be more complicated, hopefully), you’re probably used to doing something like this in regular C# 8 with classes:

using System;

var developer1 = new Developer
{
    FirstName = "David",
    LastName = "Brock",
    PreferredLanguage = "C#"
};

// create a brand new object and copy each property over manually
var developer2 = new Developer
{
    FirstName = developer1.FirstName,
    LastName = "Pine",
    PreferredLanguage = developer1.PreferredLanguage
};

Console.WriteLine(developer2.FirstName); // David
Console.WriteLine(developer2.LastName); // Pine
Console.WriteLine(developer2.PreferredLanguage); // C#

public class Developer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string PreferredLanguage { get; set; }
}

In C# 9, try a with expression instead, with your records:

using System;

var developer1 = new Developer
{
    FirstName = "David",
    LastName = "Brock",
    PreferredLanguage = "C#"
};
  
var developer2 = developer1 with { LastName = "Pine" };
Console.WriteLine(developer2.FirstName); // David
Console.WriteLine(developer2.LastName); // Pine
Console.WriteLine(developer2.PreferredLanguage); // C#

public record Developer
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
    public string PreferredLanguage { get; init; }
}

You can even specify multiple properties in the with expression, changing just what you need.

This C# 9 example above is actually an example of a top-level program! Speaking of which…

Top-level programs

This is my favorite, even if I don’t write a lot of console applications. Inside your Main method you would typically see:

using System;

public class MyProgram
{
    public static void Main()
    {
        Console.WriteLine("Hello, Wisconsin!");
    }
}

No more of this silly boilerplate code! After your using statements, do this:

using System;

Console.WriteLine("Hello, Wisconsin!");

This will need to follow the Highlander rule - there can only be one - but the same argument applies to the Main() entry method in your console applications today.

Logical patterns

OK, moving on from records (for now). With the is not pattern we used to kick off this post, we showcased some logical pattern improvements. You can officially combine any operators with and, or, and not.

A great use case would be for every developer’s battle: null checking. For example, you can more easily code against null, or in this case, not null:

not null => throw new ArgumentException($"Not sure what this is: {yourArgument}", nameof(yourArgument))

New expressions for target types

Let’s say I had a Developer type whose constructor takes a first name, last name, and preferred language. To create the object, I’d do something like this:

Developer dave = new Developer("Dave", "Brock", "C#");
var dave = new Developer("Dave", "Brock", "C#");

With C# 9, you can leave out the type.

Developer dave = new ("Dave", "Brock", "C#");
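
This also comes in handy anywhere the type is already spelled out, like field declarations:

private readonly Dictionary<string, List<Developer>> _teams = new();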

Playing with the C# 9 preview bits

Are you reading this before the C# 9 release in November 2020? If you want to play with the C# 9 bits, some good news: you can use the LINQPad tool to do so with a click of a checkbox - no install required!

]]>
<![CDATA[ Party in the cloud with feature flags and Azure App Configuration ]]> https://www.daveabrock.com/2020/06/17/use-feature-flags-azure-app-config/ 608c3e3df4327a003ba2fe38 Tue, 16 Jun 2020 19:00:00 -0500 This is part 4 in a four-part series on .NET native feature flags:

Throughout this blog series, we’ve done a test drive on all that native .NET feature flags have to offer. We’ve created a basic toggle, filtered view components and controller actions in ASP.NET Core, and written our own filters. However, you may have noticed something: these settings are all driven by our configuration in the appsettings.json file. So, turning these flags off and on requires an update to the configuration file and a re-deploy. You can simplify this with release variables in your pipeline with your favorite tool (like Jenkins, GitHub Actions, or Azure DevOps, for example), but this is such a drag.

In this post, we’ll make your life easier by storing features in Azure App Configuration. Once you connect your App Configuration instance with your app, you’ll be able to enable or disable your feature flags, literally, with the click of a button in Azure - no configuration change or redeploy required.

In the last post, we implemented functionality to display an emergency banner on our site in case unforeseen circumstances happen (go figure). It involved a verbose application setting in appsettings.json:

{
  "FeatureManagement": {
    "EmergencyBanner": {
      "EnabledFor": [
        {
          "Name": "Microsoft.TimeWindow",
          "Parameters": {
            "Start": "01 Jan 2020 12:00:00 +00:00",
            "End": "01 Jul 2020 12:00:00 +00:00"
          }
        }
      ]
    }
  }
}

With our feature in Azure App Configuration, we can remove this from our application and manage the feature, including specific time window and whether it is on or off, right in Azure. This post will refactor what we built in the last post to a cleaner solution using Azure App Configuration.

Ready to party in the cloud? Of course you are. Let’s get started.

Project setup

Before getting started, please refer to the last post and implement the emergency banner functionality.

Once you’ve done that, you’ll need to install the Microsoft.Extensions.Configuration.AzureAppConfiguration NuGet package, either from the NuGet Package Manager UI in Visual Studio or the CLI.

Once that is installed, confirm your Startup.cs file looks like the following:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddFeatureManagement().AddFeatureFilter<TimeWindowFilter>();
}

Now that we have our application set up, let’s go set up Azure App Configuration.

Set up Azure App Configuration

Before you get started with this section, confirm that you have an Azure account ready to go. Once it is, head on over to the Azure Portal at portal.azure.com. Now you’re ready to create a new Azure App Configuration instance.

Heads up! Azure is famous for continuously tweaking their UI, so screenshots may not always be up to date, but shouldn’t change too much.

Create new Azure App Configuration instance

Once you’re in the portal, do the following to create a new Azure App Configuration instance:

  1. Click the +Create a Resource button, in the top-left corner of the portal
  2. Search for App Configuration and click on the suggested result
  3. Click Create to begin the creation process
  4. Select an appropriate Azure subscription
  5. Select (or create a new) resource group
  6. Select your desired location
  7. Select your pricing tier. The Free tier should suffice for this demo  (reference this article for a full comparison)
  8. Click Review + Create, then Create, to create an Azure App Configuration instance.

After a few minutes, your instance is live. Click the Go to resource button to navigate to it.

Add feature to Azure App Configuration

With an instance of Azure App Configuration, we’re now ready to add our EmergencyBanner feature flag to it.

To add our feature flag in Azure App Configuration, perform the following steps:

  1. Click the Feature manager menu option
  2. Click +Add to add a feature
  3. Change the state to On
  4. For Key, enter EmergencyBanner.
  5. Enter some text in the Description field, such as Used to display an emergency message for delayed packages. (We left the Labels field blank, but it definitely comes in handy for grouping similar features by type or deployment environment.)
  6. Now, click +Add Filter to enter a date range for when this message will be active.
  7. For Key, enter Microsoft.TimeWindow, the alias of the time window filter we enabled in our project. Then, click the ellipses to the right, and click Edit parameters.
  8. Here, you will set two name/value pairs, and click Apply. It is crucially important that these are valid UTC dates. Also, if you are seeing this post past July 2020 (hello, future!) feel free to adjust the dates to your liking.
  • Start: “01 Jan 2020 12:00:00 +00:00”
  • End: “01 Jul 2020 12:00:00 +00:00”

Click Apply once more to create your conditional feature flag in Azure App Configuration. Here’s a snapshot of what your configuration should look like:

Time window configuration

Great! We are now ready to refactor our application to see this in action. Before you do this, however, navigate to Access keys from your Azure App Configuration instance, and copy a connection string (primary or secondary is fine).

Refactor (and simplify!) your application

Now, we’re ready to refactor our application to reflect the new value in Azure App Configuration.

Add connection string to your local secrets

How do we bring in the feature flag we created? Well, we could paste the connection string right into our appsettings.json file, but that’s neither safe nor secure. Instead, let’s use our Secret Manager. To get started with this, right-click your solution and click Manage User Secrets. That will open a blank JSON file, where you can safely store your application secrets outside of your application configuration.

Update the file to the following:

{
  "FeatureFlagsConnectionString": "<your connection string>"
}
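
If you prefer the command line, the Secret Manager tool can do the same thing (run dotnet user-secrets init first if your project doesn’t have a UserSecretsId yet):

dotnet user-secrets set "FeatureFlagsConnectionString" "<your connection string>"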

Update host builder configuration

Next, you will update your host builder configuration to connect to your Azure Application instance. Modify your CreateHostBuilder method in Program.cs to the following:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
            webBuilder.ConfigureAppConfiguration((hostingContext, config) =>
            {
                var settings = config.Build();
                config.AddAzureAppConfiguration(options =>
                {
                    options.Connect(settings["FeatureFlagsConnectionString"]);
                    options.UseFeatureFlags();
                });
            })
        .UseStartup<Startup>());

What did you do here? At startup, you are connecting to your Azure App Configuration instance using the connection string you stored in the Secret Manager, and loading its feature flags. (For performance, if you are loading a bunch of feature flag configurations, you should consider using a sentinel key here.)
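
As a sketch of what that registration might look like - the Sentinel key name here is hypothetical, and the refresh API has shifted a bit across package versions:

config.AddAzureAppConfiguration(options =>
{
    options.Connect(settings["FeatureFlagsConnectionString"])
           .ConfigureRefresh(refresh =>
               // when the sentinel key changes, refresh everything at once
               refresh.Register("Sentinel", refreshAll: true))
           .UseFeatureFlags();
});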

Change view

In the previous post, we had a model property that checked if a flag was active. Now, we can just do a simple check in Views/Home/Index.cshtml:

@inject Microsoft.FeatureManagement.IFeatureManager featureManager

    <div class="text-center">
        @if (await featureManager.IsEnabledAsync(FeatureFlags.EmergencyBanner))
        {
            <div class="alert alert-warning" role="alert">
                Because of unexpected delays, your deliveries might take longer. Thank you for your patience.
            </div>
        }
    </div>
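
(The FeatureFlags reference is just a constants class carried over from earlier in the series - something like this, if you’re jumping in mid-stream:)

public static class FeatureFlags
{
    public const string EmergencyBanner = "EmergencyBanner";
}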

Excellent! Before we run this, let’s clean up what we don’t need.

Clean up controller and configuration

Remember this mess in appsettings.json? Delete it, you don’t need it!

{
  "FeatureManagement": {
    "EmergencyBanner": {
      "EnabledFor": [
        {
          "Name": "Microsoft.TimeWindow",
          "Parameters": {
            "Start": "01 Jan 2020 12:00:00 +00:00",
            "End": "01 Jul 2020 12:00:00 +00:00"
          }
        }
      ]
    }
  }
}

Also, in your HomeController.cs, you can change back the default action to its default state:

public IActionResult Index()
{
    return View();
}

You can also delete IndexViewModel.cs, if you prefer.

Now, run the app!

After turning on flag

Next steps

I hope you enjoyed walking through setting feature flags in Azure App Configuration. Working with the Azure UI was great, but to take it to the next level consider scripting this using the Azure CLI. Microsoft has some sample scripts to help you get started.

This completes my series on exploring .NET feature management. I hope you enjoyed it. I only scratched the surface, admittedly - go explore for yourself and let me know how you’re using it!

]]>
<![CDATA[ The .NET Stacks #4: EF Core, PresenceLight, community roundup! ]]> https://www.daveabrock.com/2020/06/14/dotnet-stacks-4/ 608c3e3df4327a003ba2fe37 Sat, 13 Jun 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

A busy issue this week, as we discuss:

  • Checking in on Entity Framework Core
  • PresenceLight with Isaac Levin
  • Community roundup

Checking in on Entity Framework Core

The Entity Framework team had a community standup this week. I believe the EF standups are new, and they’re a great way for the community to get more visibility into what the team is working on - EF is a crucial piece of the .NET ecosystem with a very demanding audience (this is data access, after all).

They chatted with Erik Ejlskov Jensen - known in the community as ErikEJ - and he showed off his EF Core Power Tools. These tools allow GUI-based EF Core support in Visual Studio. The use cases include reverse engineering existing databases, creating migrations, and creating diagrams of your models. Because the EF team is focused on cross-platform support, their tooling is focused on a streamlined command-line interface (CLI)/PowerShell toolset. As such, with no native GUI support for EF Core in Visual Studio, this fills a great need - and with almost 90k downloads, you may already be familiar.

The team also previewed some new EF Core 5.0 bits, slated for release when .NET 5.0 ships in November. They discussed LogTo(), a quick-and-dirty way to get up and running with basic EF logging, with no logging dependencies required.
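If you want to kick the tires yourself, a minimal sketch might look like this, assuming a hypothetical SampleContext (these are preview bits, so the API shape could still change):

using System;
using Microsoft.EntityFrameworkCore;

public class SampleContext : DbContext
{
    // Pipe EF Core's logs straight to the console, with no logging
    // packages or ILoggerFactory setup required.
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.LogTo(Console.WriteLine);
}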

Since EF Core was released with a limited subset of EF6’s capabilities, the team has been swamped with developers screaming, with varying levels of politeness, “when will this be in EF Core?” (I admit, I took a turn myself.) I do think it’s important to point out that by engaging with the community, the team is giving everyone a better idea of what’s on the radar. For example, if you look at the EF Core 5.0 plan, you’ll see the top three most requested features are being addressed.

Dev discussions: Isaac Levin

Introducing a new feature where I talk to folks from Microsoft and across the community about what they’re working on.

If you watched Scott Hanselman’s keynote at Microsoft Build (you know, the “self-hosted” one), he showed off changing room lighting to your status in Teams (or Slack, Skype, etc.) and matching any background colors to your smart lights with the click of a button! This was brought to you by PresenceLight, a WPF app developed by Microsoft’s Isaac Levin. I caught up with Isaac to talk about his path to Microsoft, the development of the app, and advice for developers.

Isaac Levin

What motivated you to develop PresenceLight?

PresenceLight was something I had been thinking of for a bit. I wrote about it in my blog, but mostly it centered around there being no real solution to push your Teams status (called Presence) to a smart light. There are solutions that are tethered to your machine via USB or a Bluetooth dongle, but nothing that runs independently. When the Microsoft Graph team made Presence available from an API, I started working on a solution pretty quickly.

Can you walk through a quick high-level architecture of your solution?

The solution is a WPF application running on .NET Core 5, which is in preview. In summary: the end user opens the app and gets prompted to log in to Microsoft 365. My application gets a token from Azure Active Directory and makes subsequent requests to the Graph API to get the user’s profile info and presence. The app polls the API based on the user settings and broadcasts that presence to every enabled light. Check out more in-depth things at the wiki on GitHub.

Any future plans for PresenceLight? Are you going to Blazor all the things?

One of the things I realized very quickly is that WPF running on Windows is a bit of a blocker for lots of developers. I wanted to create a solution that could run on a Mac or Linux…and I came to the idea if I had an ASP.NET Worker running …that does polling and light broadcasting, I wouldn’t need a UI always open, and wouldn’t have UI thread blocking issues to worry about.

I built a server-side Blazor front-end that would act as the login mechanism to Azure AD and allow the user to manage configuration, with all the lifting being done by the Worker. Because both are ASP.NET Core projects, they can actually run in the same project, so there was one endpoint to use.

I’m also leveraging .NET Core 5 single file executable publishes, which allow the built payload to be very minimal. Right now, you can get a build from the Releases page and it will just work with Windows, Mac, Linux, and WSL2.

What is your one piece of programming advice?

The one piece of advice I have is to not worry about your quality as a developer. For a long time in my career (and still to this day) I don’t think I am any good. But honestly, whatever you think probably isn’t true. Are there bad developers? Of course. But there are far more good developers who think they are bad.

The only way you fix this is to alter your mindset, and write code and learn. Take an opportunity to learn new things, watch dev streamers, read docs, whatever you can to learn. Slowly you will realize the only thing that was causing your imposter syndrome was in your head.

For more information on PresenceLight, check out the GitHub repo (where you can find the releases), Isaac’s blog post, and of course you can always contact him on Twitter.

Check out the full interview at my site.

Odds and ends

This week, Microsoft introduced a “Web Live Preview” Visual Studio extension for .NET Framework customers to get that fine, fine “hot reload” functionality. If it goes well, we may see it working with .NET Core and Blazor as well.

ASP.NET Core topped the TechEmpower performance charts for plaintext responses per second.

The .NET Foundation announced their 2020 election this week.

Community roundup

From Microsoft

Blog posts

Podcasts/videos

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ Dev Discussions - Isaac Levin talks PresenceLight ]]> https://www.daveabrock.com/2020/06/13/dev-discussions-isaac-levin/ 608c3e3df4327a003ba2fe36 Fri, 12 Jun 2020 19:00:00 -0500 This is the full interview from my discussion with Isaac Levin in my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away!

If you watched Scott Hanselman’s keynote at Microsoft Build (you know, the “self-hosted” one), he showed off changing room lighting to your status in Teams (or Slack, Skype, etc.) and matching any background colors to your smart lights with the click of a button!

This was brought to you by PresenceLight, a WPF app developed by Microsoft’s Isaac Levin. I caught up with Isaac to talk about his path to Microsoft, the development of the app, and advice for developers.

For more information on PresenceLight, check out the GitHub repo (where you can find the releases), Isaac’s blog post, and of course you can always contact him on Twitter.

Isaac Levin

Can you walk me through your path to working at Microsoft?

I graduated college in 2010 with no real interest in joining Microsoft, mostly due to imposter syndrome. After eight years bouncing around web developer jobs and consulting, I went to Build, where I met up with an old colleague who had just received an offer to join. He was an MVP and told me all the great things Microsoft had been doing. It piqued my interest.

A few months later, by chance, I was contacted by a recruiter and eventually joined in January of 2018. My first role was a customer-facing developer advocate-type role, part of Microsoft Services (not straight consulting). Then, in November 2019, I moved into my current role as Product Marketing Manager of end-to-end technical content for Developer Tools and DevOps. My job focuses around telling the complete developer story to Azure through content and demos.

What motivated you to develop PresenceLight?

PresenceLight was something I had been thinking of for a bit. I wrote about it in my blog, but mostly it centered around there being no real solution to push your Teams status (called Presence) to a smart light.

There are solutions that are tethered to your machine via USB or a Bluetooth dongle, but nothing that runs independently. When the Microsoft Graph team made Presence available from an API, I started working on a solution pretty quickly.

I was under the impression, from how this was introduced in Scott Hanselman’s Build keynote, that PresenceLight’s main draw is syncing Philips Hue bulbs with your Microsoft Teams status. Are there more capabilities than that?

The keynote was actually focused around LIFX lights, as he used those and we partnered with them on a few other things during Build. PresenceLight currently supports Hue, LIFX, YeeLight, and Xiaomi. I would love there to be more. If anyone has some other smart lights they want to see, I recommend they send a PR in.

Can you walk through a quick high-level architecture of your solution?

The solution is a WPF application running on .NET Core 5, which is in preview. In summary: the end user opens the app and gets prompted to log in to Microsoft 365. My application gets a token from Azure Active Directory and makes subsequent requests to the Graph API to get the user’s profile info and presence. The app polls the API based on the user settings and broadcasts that presence to every enabled light. Check out more in-depth things at the wiki on GitHub.

What was the hardest part of getting this up and running?

The hardest part was setting up and configuring the auth. Because I work for a very large company, the process to create a tenant at the time was challenging (IT review and such). But now, the Presence API no longer requires admin consent, drastically lowering the barrier to entry. Also, I had never built a real WPF app before, so that was a fun experience.

Any future plans for PresenceLight? Are you going to Blazor all the things?

Right now, WPF PresenceLight is basically done, and I’m accepting new functionality via issues or pull requests. One of the things I realized very quickly is that WPF running on Windows is a bit of a blocker for lots of developers.

I wanted to create a solution that could run on a Mac or Linux, and I asked a few friends on the .NET team for some thoughts. I came to the idea if I had an ASP.NET Worker running - think of a Windows service or a Linux daemon - that does polling and light broadcasting, I wouldn’t need a UI always open, and wouldn’t have UI thread blocking issues to worry about.

My original POC was just that, with a settings file to manage configuration. I quickly realized that was bad UX. I built a server-side Blazor front-end that would act as the login mechanism to Azure AD and allow the user to manage configuration, with all the lifting being done by the Worker. Because both are ASP.NET Core projects, they can actually run in the same project, so there was one endpoint to use.

I’m also leveraging .NET Core 5 single file executable publishes, which allow the built payload to be very minimal. Right now, you can get a build from the Releases page and it will just work with Windows, Mac, Linux, and WSL2.

I see the cross-platform version as a more technical path for PresenceLight, as the WPF app is easy to install and run. It is available on almost every package deployment framework, and it just works.

Right now, I’m working on how to step up the experience. A lot of technical folks like Raspberry Pis, and I have a working example that I use at home to run PresenceLight on a Raspberry Pi over my local network. I’m still trying to figure out a way to make this distributable, as right now everything is tied to a particular IP address, and forcing folks to do that is a bad experience. After I figure that out, who knows?

What is your one piece of programming advice?

The one piece of advice I have is to not worry about your quality as a developer. For a long time in my career (and still to this day) I don’t think I am any good. But honestly, whatever you think probably isn’t true. Are there bad developers? Of course. But there are far more good developers who think they are bad.

The only way you fix this is to alter your mindset, and write code and learn.

Take an opportunity to learn new things, watch dev streamers, read docs, whatever you can to learn. Slowly you will realize the only thing that was causing your imposter syndrome was in your head.
]]>
<![CDATA[ The .NET Stacks #3: Native feature flags, local Kubernetes, community roundup! ]]> https://www.daveabrock.com/2020/06/08/dotnet-stacks-3/ 608c3e3df4327a003ba2fe35 Sun, 07 Jun 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

In this week’s issue we will be talking about:

  • Native .NET feature flags
  • Working with Kubernetes clusters locally in Visual Studio
  • Community roundup

Native .NET feature flags

Have you implemented feature flags (or feature toggles) in your applications? They’re a great way to easily enable and disable system features and behaviors without changing code. They can generate complexity if not managed well, but the benefits are great when it comes to A/B testing, gradual rollouts, and managing long-living branches. Martin Fowler has a good language-agnostic overview on the topic.

Until somewhat recently, to get this capability in .NET you had to either roll your own solution or depend on an external library such as LaunchDarkly or Esquio. You can now use native Microsoft libraries to accomplish much of this functionality. Microsoft provides the Microsoft.FeatureManagement library (supported by .NET Standard 2.0, meaning compatibility with .NET Framework!) and Microsoft.FeatureManagement.AspNetCore. I’ve noticed a lot of folks in the community are writing about it (including yours truly!), as it’s easy to use and has a pretty solid feature set. The external libraries still hold value with their advanced functionality and support, but if you need something that accomplishes most of what you need, all on top of ASP.NET Core configuration, you’ve got it.

The libraries are open source, and you can see many of the benefits:

  • Pre-packaged feature filters, and the ability to write your own with the IFeatureFilter interface
  • Easy service registration in ASP.NET Core middleware
  • Ability to gate features at the controller/action level with FeatureGate
  • Support for a <feature> tag helper to get away from the @if Razor syntax and to assist with conditional rendering
  • HttpContext support
  • Ability to evaluate a feature if a set of flags is specified, and the ability to check for missing feature filters

For details, check out the docs.

New Visual Studio feature: Local Process with Kubernetes

Microsoft rolled out a new preview of Visual Studio 2019 this week (16.7 Preview 2, to be exact - if you aren’t aware, you can install these side-by-side with regular Visual Studio). It’s easy to gloss over another preview, but the VS team has introduced a promising new feature: Local Process with Kubernetes.

Kubernetes, love it as we might, is Latin for “overly complicated.” (It’s true, you don’t need to Google it.) Even if you have a solid workflow down, it involves updating your source code, building a container image, and deploying to your cluster - and forcing you to manage Dockerfile and Kubernetes manifest files. If you have a lot of microservices, each with their own data stores, it’s a real, actual headache.

Using Local Process with Kubernetes, you connect your machine to your Kubernetes cluster and don’t need to compile all your dependencies every single time. Environment variables, connection strings, and the like are inherited by the local microservice code. Under the covers, the feature redirects traffic between your connected cluster and your development machine.

It looks promising. If you try it out, let me know how it goes (you can reply to this email, or the Twitter links are below).

(You can also use this feature in Visual Studio Code.)

Odds and ends

Authentication and security is not easy, and a pain point for those with simple scenarios in ASP.NET Core. It is designed to be standard-based, but the learning curve is real. The community is listening.

Need a Blazor fix? Register for Blazor Day on June 18.

Community roundup

From Microsoft

Blog posts

Podcasts/videos

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ Implement custom filters in your ASP.NET Core feature flags ]]> https://www.daveabrock.com/2020/06/07/custom-filters-in-core-flags/ 608c3e3df4327a003ba2fe34 Sat, 06 Jun 2020 19:00:00 -0500 So far in this series, we introduced Microsoft.FeatureManagement as a way to manage feature flag functionality in your .NET applications and used the Microsoft.FeatureManagement.AspNetCore library to conditionally filter HTML components and apply filters across controller action methods and classes.

These examples are great to show off how to get started with native feature flags, but you might be wondering if you can do something more powerful than simply checking booleans. And, you can! Using feature filters, you get three filters out of the box, and you can also write your own.

In this post, I’ll show you how to:

  • Use the TimeWindowFilter to conditionally show a feature based on a time range
  • Write a custom filter to detect a user’s browser to partially roll out a feature

This is part 3 in a four-part series on .NET native feature flags.


Implement IFeatureFilter using provided filters

The Microsoft.FeatureManagement library includes support for the IFeatureFilter interface, which allows you to define whether criteria are met to enable (or disable) a feature. Included with this interface are three filters you can plug in without custom code, including the TimeWindowFilter we’ll use next and the PercentageFilter mentioned later in this post.

In this example, we’ll be showing off the TimeWindowFilter. Before getting started, make sure that you have set up our sample app as we did in the first post in this series.

Implement TimeWindowFilter

The TimeWindowFilter does exactly as its name suggests. You provide a start and end time in your configuration, as parseable date/time strings, and if the current date is in the window, the feature flag will be activated.

Especially in these times, you might see a scenario where you’d like to have a temporary banner on your page that says something like, Because of unexpected delays, your deliveries might take longer. Thank you for your patience. Let’s set that up.

Update Startup class

Previously in the series, you added the following to the ConfigureServices method in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
  //
  services.AddFeatureManagement();
}

You’ll need to modify this slightly to implement feature filters. Chain a call to AddFeatureFilter onto AddFeatureManagement, which returns an IFeatureManagementBuilder instance:

using Microsoft.FeatureManagement;
using Microsoft.FeatureManagement.FeatureFilters;

public void ConfigureServices(IServiceCollection services)
{
  //
  services.AddFeatureManagement().AddFeatureFilter<TimeWindowFilter>();
}

Update FeatureFlags.cs

As we’ve done throughout this series, you’ll want to update our list of feature flags in our FeatureFlags.cs to avoid hard-coding strings in our controller:

public static class FeatureFlags
{
  public const string EmergencyBanner = "EmergencyBanner";
}

Add TimeWindow configuration to appsettings.json

Under the FeatureManagement section in your appsettings.json, you’ll add a section for our EmergencyBanner.

Underneath the EmergencyBanner property, define an EnabledFor array. This array takes filter elements; if any of their criteria are satisfied, the flag will be set to true. For each item, you’ll need to specify a Name value and any Parameters. The Parameters object is optional and allows you to pass in any parameter values required by the feature filter in question.

Here’s what the FeatureManagement section looks like:

{
  "FeatureManagement": {
    "EmergencyBanner": {
      "EnabledFor": [
        {
          "Name": "Microsoft.TimeWindow",
          "Parameters": {
            "Start": "01 Jan 2020 12:00:00 +00:00",
            "End": "01 Jul 2020 12:00:00 +00:00"
          }
        }
      ]
    }
  }
}

If users are viewing this page between January 1 and July 1 of 2020, the feature flag will be activated and the users will see the warning.

Update the M, the C, and the V

Don’t worry, the hard part is over. Let’s update our IndexViewModel.cs to take a boolean:

namespace FeatureFlags.Models
{
    public class IndexViewModel
    {
        public bool ShowEmergencyBanner { get; set; }
    }
}

Now, we can update our controller to update the model value if the feature flag is enabled. In HomeController.cs, call IsEnabledAsync. When called, the IFeatureManager will check our configuration to see if the feature is enabled.

public async Task<IActionResult> Index()
{
  var indexViewModel = new IndexViewModel()
  {
    ShowEmergencyBanner = await _featureManager.IsEnabledAsync(FeatureFlags.EmergencyBanner)
  };

  return View(indexViewModel);
}

Now, let’s update our Home/Index.cshtml view to conditionally render the warning.

@model IndexViewModel

<div class="text-center">
  @if (Model.ShowEmergencyBanner)
  {
    <div class="alert alert-warning" role="alert">
      Because of unexpected delays, your deliveries might take longer. Thank you for your patience.
    </div>
   }
</div>

Fire up the app and see your hard work. Good job, you.

After turning on flag

Write a custom filter

With a handle on how we use the shipped filters, we’re now ready to write our own. Let’s imagine a scenario where you only want a subset of users to access your feature. You could use the PercentageFilter (sketched below for reference), but we can also detect a user’s browser and, say, only ship a feature to Chrome users. Let’s give it a shot.
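For reference, here’s a minimal sketch of that percentage approach, assuming a hypothetical BetaFeature flag rolled out to roughly half of requests. (You’d also register PercentageFilter in ConfigureServices, just as we did with TimeWindowFilter.)

"BetaFeature": {
  "EnabledFor": [
    {
      "Name": "Microsoft.Percentage",
      "Parameters": {
        "Value": 50
      }
    }
  ]
}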

Parameters, a second look

When we write our own, we’ll get to see the structure of what you need to specify in appsettings.json.

"MyFilter": {
      "EnabledFor": [
        {
          "Name": "MyFilterName",
          "Parameters": {
            "ParametersArray": [ ]
          }
        }
      ]
    }

For our specific case, we want to only activate the flag for Chrome users. We’ll specify this as an array underneath Parameters in appsettings.json. As such, this is what our new section should look like.

"BrowserFilter": {
  "EnabledFor": [
  {
    "Name": "BrowserFilter",
    "Parameters": {
      "AllowedBrowsers": [ "Chrome" ]
    }
  }]
}

You’ll notice that EnabledFor and the AllowedBrowsers are both arrays. This means you can specify a set of conditions under EnabledFor and multiple browsers - we are doing one apiece to keep it simple.

Now, after we update appsettings.json, we will create a BrowserFilterSettings class to store the AllowedBrowsers.

namespace FeatureFlags
{
  public class BrowserFilterSettings
  {
    public string[] AllowedBrowsers { get; set; }
  }
}

As we’ve done throughout the series, let’s add a constant to FeatureFlags.cs:

public const string BrowserFilter = "BrowserFilter";

Creating your filter

Now we’re ready to write our actual filter - let’s create BrowserFilter.cs. You’ll need to implement the IFeatureFilter interface, which requires a single EvaluateAsync method.

Here’s what our new class looks like. Take a look at the code before I explain what it does.

using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.FeatureManagement;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace FeatureFlags
{
    [FilterAlias(FeatureFlags.BrowserFilter)]
    public class BrowserFilter : IFeatureFilter
    {
        private readonly IHttpContextAccessor _httpContextAccessor;
        public BrowserFilter(IHttpContextAccessor httpContextAccessor)
        {
            _httpContextAccessor = httpContextAccessor;
        }

        public Task<bool> EvaluateAsync(FeatureFilterEvaluationContext context)
        {
            var userAgent = _httpContextAccessor.HttpContext.Request.Headers["User-Agent"].ToString();
            var settings = context.Parameters.Get<BrowserFilterSettings>();
            return Task.FromResult(settings.AllowedBrowsers.Any(userAgent.Contains));
        }
    }
}

Looking at the EvaluateAsync method, you’ll notice:

  • We inspect the headers of the HTTP request to get the User-Agent value, which identifies the client browser
  • Then, we bind the filter’s Parameters to our BrowserFilterSettings class to see the allowed browsers pulled from configuration
  • Finally, we return true if the userAgent contains a value in the AllowedBrowsers array

As some housekeeping, we inject an instance of IHttpContextAccessor, which needs to be registered with the dependency injection container.

Now, add that registration and a reference to your new feature filter in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
  services.AddHttpContextAccessor();
  services.AddFeatureManagement().AddFeatureFilter<BrowserFilter>();

  services.AddControllersWithViews();
}

Add a flag check in your view

Now, if you add a check similar to the following, you will see your filter in action!

<feature name="@FeatureFlags.BrowserFilter">
  <div>We are using Chrome.</div>
</feature>

Nice work. Check in with us next week, as we’ll make this production-ready with Azure App Configuration.

]]>
<![CDATA[ The .NET Stacks #2: Project Tye, YARP, news, and community! ]]> https://www.daveabrock.com/2020/05/31/dotnet-stacks-2/ 608c3e3df4327a003ba2fe33 Sat, 30 May 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

Happy Monday, everyone. I have to be honest, it feels awkward diving right into .NET after the last week or so, especially considering what’s happening in America right now. I’ll just say this before I get to it: I hope everyone is okay.

Recovering from Build

Build was great, but it was impossible trying to keep up with all the announcements. It’s always great to geek out on new advancements in the .NET ecosystem, but I knew I couldn’t focus on everything. Despite that, I’ve been able to unwind from the madness and think about two updates that developers might have missed but deserve some attention: Project Tye and YARP.

Project Tye

When designed well, microservices are great. At the risk of sounding like a Gartner consultant, they help you with scalability, data isolation, allow teams to pick the best technology for the job (but not all jobs), and enable small and focused teams to write loosely coupled services that can be updated and deployed easily.

But: let’s say that you’ve decided to quit your job and become Amazon’s newest competitor. You write up a quick web app and some services as REST APIs—an account management service. A shipping service. An inventory service. You also implement the proxy design pattern and create an API gateway to handle communication between these services and your web application. Because you love .NET, you’ve coded all this in a .NET Core MVC web app with four Core APIs behind it. You pat yourself on the back, fire off an email to your (former) boss to let him know you’re going to be rich, then call it a night.

In the morning, you start debugging your services and quickly wonder if you can recall that email. You’ve just signed up for managing the infrastructure and development lifecycle of five applications—and as a result, debugging can be a real pain. And you haven’t even considered how you will deploy and manage your distributed application.

This is where Project Tye comes in, giving us hope for a better way: an experimental .NET Foundation project that promises to make “developing, testing, and deploying microservices and distributed applications easier.”

The goals of Project Tye are:

  • Simplifying the development of microservices by providing the capability to run multiple services with a single command, as well as providing containers for dependencies and providing address discovery using simple conventions
  • Automating deployment of .NET applications to Kubernetes by generating manifests with minimal configuration using a single configuration file

This project is in experimental mode but that also means you can make a big impact by getting involved. Check out the Microsoft blog post and the GitHub repository for details. As for me, I’m excited about a simplified development process and looking forward to not having to run multiple Visual Studio applications to nail down an issue.

YARP

Do you use a reverse proxy? If you aren’t familiar with them, a few common use cases/scenarios:

  • Security: a big use case, it takes requests and performs authentication/authorization
  • Load balancing: a server can sit in front of your backend servers and distribute requests efficiently
  • Optimization: they can manage compressing inbound/outbound data and provide caching, offloading intensive tasks from your core application servers

YARP—which stands for YARP: A Reverse Proxy—is a new project that is focused on creating a reverse proxy server. According to the README at the YARP repo, Microsoft saw a lot of their internal teams either building a proxy or asking for one, so they got to work on a common solution.

If you’re wondering “why not stick with what I have now, like nginx?”, feel free, Microsoft says. But this is suited for those who want a super-fast proxy server using infrastructure from the ASP.NET and .NET ecosystems. David Fowler, Microsoft’s partner software architect, promises faster performance than nginx. Proxies like nginx can be configuration headaches, for sure, so a simplified configuration model built on .NET assets will definitely be a big selling point.

For more details, check out the Microsoft blog post and the GitHub repository. Much like with Tye, it is early and your feedback is appreciated and encouraged.

Odds and ends

This week, we were able to see the results of the 2020 Stack Overflow Developer Survey. About 65,000 developers took part and this is what we learned:

  • Rust is again the “most loved” language—despite most people having no experience in it. TypeScript came in at #2, and C# at #8.
  • ASP.NET Core was the #1 “most loved” Web framework at 70.7%, outranking React, Vue, Express, and Gatsby, and also the #1 “most loved” other framework


The .NET Foundation provided their April/May 2020 update this week. It’s worth checking out the update to see some new projects.

A few highlights:

  • Docker.DotNet, a library to interact with Docker Remote API endpoints in your .NET applications
  • Python.NET, a package that gives Python programmers nearly seamless integration with the .NET 4.0+ CLR on Windows and Mono runtime on Linux and OSX
  • FlubuCore, a cross platform build and deployment automation system that allows you to define your build and deployment scripts in C# using an intuitive fluent interface

There’s been plenty of demos on Blazor and all its capabilities, but I really recommend going through the latest ASP.NET standup, where the team showcases a demo app that isn’t just a typical “hello world” app that gets your feet wet. It’s called CarChecker and shows off authentication, offline support, localization, browser storage, and more. It even has image recognition from ML.NET! Highly recommended.

Community roundup

Announcements

Blog posts

Podcasts/videos

New subscribers and feedback

Has this email been forwarded to you? Welcome! I’d love for you to subscribe and join the community. I promise to guard your email address with my life.

I would love to hear any feedback you have for The .NET Stacks! My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. See you next week!

]]>
<![CDATA[ Using Microsoft.FeatureManagement.AspNetCore to filter actions and HTML ]]> https://www.daveabrock.com/2020/05/30/introducing-feature-management-aspnetcore/ 608c3e3df4327a003ba2fe32 Fri, 29 May 2020 19:00:00 -0500 In our previous post, we introduced Microsoft.FeatureManagement as a way to manage feature flag functionality in your .NET applications. As mentioned in the post, this library is compatible with any .NET Standard application.

In this post, we’ll kick things up a notch and show how you can pair this with an ASP.NET Core-only library, Microsoft.FeatureManagement.AspNetCore, to perform the following tasks with minimal required configuration:

  • Filter out controller action methods and classes
  • Conditionally filter out HTML in your views

This is part 2 in a four-part series on .NET native feature flags.


Set up the sample application

If you wish to follow along, refer to the previous post for details on how we set up our sample application. In addition, you’ll need to add the Microsoft.FeatureManagement.AspNetCore library by performing one of the following two steps:

  • From the NuGet Package Manager (easiest way is to right-click your solution in Visual Studio, then clicking Manage NuGet Packages), search for and install Microsoft.FeatureManagement.AspNetCore
  • From the dotnet CLI, you can execute the dotnet add package Microsoft.FeatureManagement.AspNetCore command

Here is what the project file looks like now:

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.FeatureManagement" Version="2.0.0" />
    <PackageReference Include="Microsoft.FeatureManagement.AspNetCore" Version="2.0.0" />
  </ItemGroup>
</Project>

Filter out controller action methods and classes

To better manage your feature flags, you can filter from the level of an action method, or even a class, using the FeatureGate attribute.

Before we get to the fun bits, let’s set up another feature flag in our configuration. As we did in the previous post, add a new feature flag in appsettings.json. Let’s call it FlagController. Here’s what our configuration looks like now:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "FeatureManagement": {
    "WelcomeText": true,
    "FlagController": true 
  },
  "AllowedHosts": "*"
}

Note that we will set this one to true by default, for a cleaner demo. Then, let’s go to our FeatureFlags.cs file and define what we created. Here’s a look at the updated file:

public static class FeatureFlags
{
    public const string WelcomeText = "WelcomeText";
    public const string FlagController = "FlagController";
}

Finally, add a new controller called FlagController and make sure it inherits from the Controller base class. The file should look like this:

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement.Mvc;

namespace FeatureFlags.Controllers
{
    public class FlagController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

Finally, in your Views folder, create a Flag folder, and then an Index.cshtml file with the following content, so the controller’s Index action can find its view. (You could have scaffolded this, but since we aren’t using Entity Framework, this is probably faster.)

<div class="text-center">
    <h1 class="display-4">The controller flag is enabled!</h1>
    <p>Learn about <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

This attribute is super flexible: you can pass in multiple features and decide to gate actions if any or all of the features are enabled. Take a look at the FeatureGateAttribute documentation for details.
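As a quick illustration, here’s a sketch of gating a controller on multiple flags at once. The Beta flag is hypothetical, and RequirementType lives in the Microsoft.FeatureManagement namespace:

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;
using Microsoft.FeatureManagement.Mvc;

namespace FeatureFlags.Controllers
{
    // Reachable if either flag is on; RequirementType.All would require both.
    [FeatureGate(RequirementType.Any, FeatureFlags.FlagController, FeatureFlags.Beta)]
    public class BetaController : Controller
    {
        public IActionResult Index() => View();
    }
}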

For our example, we will add [FeatureGate(FeatureFlags.FlagController)] to our controller class. Any action methods in this class will only be accessible if this flag is set to true. Here’s the latest FlagController.cs.

using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement.Mvc;

namespace FeatureFlags.Controllers
{
    [FeatureGate(FeatureFlags.FlagController)]
    public class FlagController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

If you run your app and navigate to http://localhost:<port>/flag, you will see the new view rendered appropriately.

Now, if you go to your appsettings.json, set the flag to false, and reload the page, you’ll get a 404. It’s ugly, but it means the gate works—great!

Of course, if you want to provide a better experience for your users, you can add something like UseStatusCodePages middleware to render something friendlier.
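Here’s a minimal sketch of that in Startup.Configure, assuming your Home controller has an Error action to re-execute against:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Re-run the pipeline against a friendly route for 400-599 status
    // codes instead of returning a bare status code.
    app.UseStatusCodePagesWithReExecute("/Home/Error");

    // ...the rest of your middleware pipeline
}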

Conditionally render HTML in your views

Let’s say you want to only render a component on your site if a certain condition applies: for example, that someone is viewing a beta environment. You could definitely mimic what we did in our controllers, right in your Razor view, by injecting IFeatureManager and doing something like this:

@inject Microsoft.FeatureManagement.IFeatureManager FeatureManager

@if (await FeatureManager.IsEnabledAsync(FeatureFlags.Beta))
{
  <div class="beta">We are in beta!</div>
}

To make it easier on your life, you can use a specialized tag helper for this, the FeatureTagHelper.

So, instead, you can try something like this:

<feature name="@FeatureFlags.Beta">
    <div class="beta">We are in beta!</div>
</feature>

You can definitely take it further than this as well—the tag helpers allow you to have alternate content displayed if the feature is disabled (using the Negate property), and you can also require ANY or ALL of a feature set (using the Requirement property). Take a look at the documentation for details.
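Here’s a quick sketch of both properties, reusing the hypothetical Beta flag from above:

<feature name="Beta" negate="true">
    <div>The beta is over. Thanks for participating!</div>
</feature>

<feature name="Beta, FlagController" requirement="All">
    <div>The beta and the new controller are both live!</div>
</feature>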

]]>
<![CDATA[ Introducing the Microsoft.FeatureManagement library ]]> https://www.daveabrock.com/2020/05/24/introducing-feature-management-copy/ 608c3e3df4327a003ba2fe31 Sat, 23 May 2020 19:00:00 -0500 A couple of weeks ago, I needed to find a way to better manage new functionality for a feature that was not ready yet (it broke stuff). We didn’t know exactly when the feature would be shipped, but we also didn’t want to deal with branch and merging headaches when the moment came.

My first thought: this was a perfect candidate for feature flags (or feature toggles). This allows you to merge code into a production branch, but only enable when ready. Instead of waiting for developers to be ready, you’re waiting for the end user to be ready.

Instead of writing this logic and having to deal with the corner case headaches, I knew Microsoft shipped a Microsoft.FeatureManagement library.

Unfortunately, that was all I knew and took this as a great opportunity to dig deeper. I learned a ton and will be dividing my new-found knowledge into four blog posts over the next several weeks.

I’ll be writing a few posts on this topic.

Create sample application and add NuGet packages

For this post, I’ll be using an ASP.NET Core Web Application with the traditional Model-View-Controller scaffolding. If you want to code along with me, make sure to create a new project. You can do this from the Visual Studio UI or, if VS Code is your style, from the dotnet CLI by executing dotnet new mvc. Once you create your project, you’ll need to install Microsoft.FeatureManagement. You can do this in one of two ways:

  • From the NuGet Package Manager (easiest way is to right-click your solution in Visual Studio, then clicking Manage NuGet Packages)
  • From the dotnet CLI, you can execute the dotnet add package Microsoft.FeatureManagement command

(While we are using an ASP.NET Core application, the library depends on .NET Standard 2.0—so even non-.NET Core applications can benefit.)

Once the package is installed, update the dependency injection in Startup.cs to the following:

using Microsoft.FeatureManagement;
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddFeatureManagement();
}

You’re now ready to get this party started!

A simple example

To get the gist of feature flags, we can just update the default message for the app that ships with the default ASP.NET Core scaffolding. Out of the box, the message says Welcome. Let’s change it so that the message says Welcome to the feature flag when the functionality is enabled. The Microsoft.FeatureManagement library will check your appsettings.json file for its configuration. By default, it will look for a FeatureManagement section. Let’s add an entry now.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "FeatureManagement": {
    "WelcomeText": false
  },
  "AllowedHosts": "*"
}

In your Models folder, create an IndexViewModel.cs file and include the Message property.

public class IndexViewModel
{
    public string Message { get; set; }
}

Then, in Views/Home/Index.cshtml, update the file to this:

@model IndexViewModel
@{
    ViewData["Title"] = "Home Page";
}
<div class="text-center">
    <h1 class="display-4">@Model.Message</h1>
    <p>Learn about <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

Here, we are including the model we created, and rendering its Message property. We will soon introduce logic to populate the message with a certain value, depending on whether the feature flag is enabled in our application. Next, let’s open up HomeController.cs and take a dependency on the IFeatureManager service by injecting it into the constructor.

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;
    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }
    // stuff we'll include soon
}

The IFeatureManager interface looks like this:

public interface IFeatureManager
{
    IAsyncEnumerable<string> GetFeatureNamesAsync();
    Task<bool> IsEnabledAsync(string feature);
    Task<bool> IsEnabledAsync<TContext>(string feature, TContext context);
}

We’ll use IsEnabledAsync to check for the feature we specified in appsettings.json. We could pass the string directly, but a better way would be to add a class to store these strings. To do that, create a file in the root of the project called FeatureFlags.cs.

public static class FeatureFlags
{
    public const string WelcomeText = "WelcomeText";
}

There, I feel better. Back to HomeController.cs, change the Index action method to the following:

public async Task<IActionResult> Index()
{
    var indexViewModel = new IndexViewModel()
    {
        Message = await _featureManager.IsEnabledAsync(FeatureFlags.WelcomeText)
                  ? "Welcome to the feature flag"
                  : "Welcome"
    };
    return View(indexViewModel);
}

In the Index action, we are calling IsEnabledAsync and passing in our flag’s name via the constant we just created. If that value is true, the Message will display Welcome to the feature flag; if not, Welcome. The complexity you deal with every day, right? If you run your app, you just see the standard welcome message, because your feature flag is set to false in your configuration.

Before turning on flag

Now, if you go to your appsettings.json file, set WelcomeText to true, and refresh the page, you’ll see the feature flag in action! (You only have to reload the page, and not the app, as the configuration provider picks up the change without an app restart!)

After turning on flag

When thinking in terms of deployment, you can set release variables and whatnot to better manage when to turn these on. We will discuss these when we talk about integrating with Azure.

]]>
<![CDATA[ The .NET Stacks #1: Microsoft Build, announcements galore! ]]> https://www.daveabrock.com/2020/05/24/dotnet-stacks-1/ 608c3e3df4327a003ba2fe30 Sat, 23 May 2020 19:00:00 -0500 This is an archive of my weekly (free!) newsletter, The .NET Stacks. Consider subscribing today to get this content right away! Subscribers don’t have to wait a week to receive the content.

Welcome to the very first issue of The .NET Stacks! I appreciate all of you for being my first subscribers—even before it was even launched. If you have any suggestions, you can simply reply to this email or hit me up on Twitter. And if you like what you see, I would love for you to share with others as well. Let’s get started!

3 things to keep you informed

The three things you need to know to keep you current and impress your colleagues (the details of which are in this newsletter):

  1. Blazor WebAssembly support is now live! This gives you the ability to debug C# in the browser and use one language for your entire development experience (while you may need to use some interop libraries). Need to work on the front-end, but don’t have a team with a good JavaScript skillset? Are you a one-person shop too overwhelmed to learn JavaScript? Well, there you go.
  2. With .NET 5, to be released in November 2020, comes the vision of “One .NET,” including Xamarin! A core tenet is producing one .NET runtime and framework that can be used everywhere.
  3. C# 9 is coming along, which brings a ton of improvements like init-only properties and records (among many, many other things).

Microsoft Build

As you may have heard, Microsoft Build occurred this week—from the comfort of our homes. The real question is: will we ever want to go back, after having this go so seamlessly?

All the content is available for on-demand viewing on the Build site and Channel 9, including the keynotes.

On top of the keynotes, there were several valuable break-out sessions that you can watch at your convenience, and, of course, a ton of announcements and news.

Thought of the week

For many of us who have been writing .NET for awhile, we can easily split Microsoft into two companies: one, the Gates/Ballmer years (“Old Microsoft”), with its resistance to open source in favor of world dominance and Microsoft lock-in; and two, the kinder/gentler Microsoft that strives to be open to all, in favor of driving business to its Azure offerings (“New Microsoft”).

When New Microsoft announced the Windows Package Manager, it was met with quite a bit of skepticism from the community, and caused one spirited GitHub discussion. Leaders in the space, like Chocolatey, don’t seem to be worried yet, but some folks in the community are asking: why is this needed, with so many community projects out there that already solve this problem (or are further along)? Microsoft points at the issues delivering a native app as well as security considerations.

Microsoft’s stance of “if you are happy with what you use, great” comes off to some as a little naive, considering how the remnants of Old Microsoft still linger to some folks. As for the community, some patience would help without jumping to conclusions, as Microsoft has open-sourced this and is open to feedback—and New Microsoft has a track record of working with developers, not against them.

From the community

This section is a bit brief because of all the Build announcements, but alas:

What are you building?

Care to show off what you’re building? Reach out at hey@dotnetstacks.com, and it might be featured in a future newsletter.

Tweet of the week


Feedback?

I would love to hear any feedback you have for The .NET Stacks! For a little while, I’ll be trying some new things—as with software, some might work and some might not. My goal is to make this the one-stop shop for weekly updates on developing in the .NET ecosystem, so I look forward to any feedback you can provide. You can directly reply to this email, or talk to me on Twitter as well. I’ll talk to you next week!

]]>
<![CDATA[ Introducing The .NET Stacks weekly newsletter ]]> https://www.daveabrock.com/2020/05/21/introducing-dotnetstacks/ 608c3e3df4327a003ba2fe2f Wed, 20 May 2020 19:00:00 -0500 Every Friday or so, I typically round up the week’s best links. I wanted to expand on this further and have a weekly digest of news and perspectives on the .NET world. I’ve always enjoyed the newsletter format but have found many implementations to be lacking. I think the community can do better.

I’ve launched a weekly newsletter called The .NET Stacks. It’s a round up of the last week in .NET, but also delivered with perspective and analysis.

If you’re interested, feel free to register below. I hope you enjoy it!

]]>
<![CDATA[ First Look: C# Source Generators ]]> https://www.daveabrock.com/2020/05/08/first-look-c-sharp-generators/ 608c3e3df4327a003ba2fe2e Thu, 07 May 2020 19:00:00 -0500 Last week, Microsoft introduced a preview of C# Source Generators, to be released with the C# 9 release. While the tooling isn’t great (yet), it’s available for curious developers to play with—so long as you are on the latest version of Visual Studio 2019 preview and the latest .NET 5 preview, too. If you’ve been geeking out on C# for a while, you may remember this was proposed as early as C# 6.

Heads up! As you can imagine, this is in preview, so this content is definitely subject to change. Be aware, especially as the community provides feedback, that the samples may not always work 100%.


Source generators overview

From the Microsoft blog post, they define source generators as “a piece of code that runs during compilation and can inspect your program to produce additional files that are compiled together with the rest of your code.” It’s a compilation step that generates code for you based on your existing code. The benefit here, straight from Microsoft’s design document, is that source generators can read the contents of the compilation before runtime and access any additional files—meaning C# can now read both C# code and files specific to source generation.

So, in a step-by-step process, your application:

  1. Kicks off compilation
  2. Runs the source generator compilation step, analyzing your code and then generating code added as compilation input
  3. Completes the source generator step and continues and eventually completes the compilation process

If you’ve ever leaned on reflection in your projects, you might begin to see many use cases for these solutions—C# source generators provide many of the advantages reflection offers with few, if any, of the drawbacks. Reflection is extremely powerful when you want to query properties and attributes you don’t know about at compile time. Of course, getting type information at runtime can incur a large performance cost, so offloading this work to compilation is definitely a game-changer in C#.

How else can this help you? Microsoft is developing a source generators cookbook that walks through a bunch of real-world scenarios, such as class generation, file transformation, interface implementation, and more.

It is important to note that source generators are additive only, meaning generators add new code to your compilation but do not modify existing user code. You can reference the design document for details.

The ISourceGenerator interface

Source generators implement the ISourceGenerator interface, in the Microsoft.CodeAnalysis namespace. It looks like this:

public interface ISourceGenerator
{
    void Initialize(InitializationContext context);
    void Execute(SourceGeneratorContext context);
}

The Initialize method is called once by the host (in this case, the IDE or compiler). The passed-in InitializationContext registers callbacks for future generation calls. Many times, you probably won’t mess with this. The Execute method, meanwhile, is where the magic happens: the passed-in SourceGeneratorContext provides access to the current compilation using the following properties.

public readonly struct SourceGeneratorContext
{
    public ImmutableArray<AdditionalText> AdditionalFiles { get; }

    public CancellationToken CancellationToken { get; }

    public Compilation Compilation { get; }

    public ISyntaxReceiver? SyntaxReceiver { get; }

    public void ReportDiagnostic(Diagnostic diagnostic) { throw new NotImplementedException(); }

    public void AddSource(string fileNameHint, SourceText sourceText) { throw new NotImplementedException(); }
}

OK, enough explanation—let’s try it out!

Try it out

In this post, we will get our feet wet by trying out:

  • A simple, “hello world” example
  • Implementing the INotifyPropertyChanged pattern

Your first source generator

After you confirm you’re on the latest version of Visual Studio 2019 preview and the latest .NET 5 preview, crack open Visual Studio 2019 preview and create a new .NET Standard 2.0 class library. Call it something like MyFirstGenerator.

So, tooling isn’t great. For now, you’ll need to edit the project file to the following structure:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <LangVersion>preview</LangVersion>
    </PropertyGroup>
    <PropertyGroup>
        <RestoreAdditionalProjectSources>https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json;$(RestoreAdditionalProjectSources)</RestoreAdditionalProjectSources>
    </PropertyGroup>
    <ItemGroup>
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.Workspaces" Version="3.6.0-3.20207.2" PrivateAssets="all" />
        <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.0.0-beta2.final" PrivateAssets="all" />
    </ItemGroup>
</Project>

In your new class library, annotate your class with a [Generator] attribute. If you have Visual Studio implement the interface for you, it’ll now look like this.

using System;
using Microsoft.CodeAnalysis;

namespace MyFirstGenerator
{
    [Generator]
    public class Generator : ISourceGenerator
    {
        public void Initialize(InitializationContext context)
        {
            throw new NotImplementedException();
        }

        public void Execute(SourceGeneratorContext context)
        {
            throw new NotImplementedException();
        }
    }
}

Now, we can generate a silly class with some silly properties in our Execute implementation. We’ll leave Initialize alone. (You’ll also need using directives for System.Text and Microsoft.CodeAnalysis.Text, for Encoding and SourceText.)

public void Execute(SourceGeneratorContext context)
{
    var sourceText = SourceText.From(@"
        namespace GeneratedClass
        {
            public class SeattleCompanies
            {
                public string ForTheCloud => ""Microsoft"";
                public string ForTheTwoDayShipping => ""Amazon"";
                public string ForTheExpenses => ""Concur"";
            }
        }", Encoding.UTF8);
    context.AddSource("SeattleCompanies.cs", sourceText);
}

Now, build it — with any luck the generated class should be added to your compilation.

To test this out, create a new console project (let’s call it MyFirstGeneratorTest or something similar). Then, add a reference to the source generator project we just created (MyFirstGenerator). To get anything to run, you need to manually edit the project file: set LangVersion to preview, and add OutputItemType and ReferenceOutputAssembly attributes to your ProjectReference. It should look like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\MyFirstGenerator\MyFirstGenerator.csproj"
                      OutputItemType="Analyzer"
                      ReferenceOutputAssembly="false" />
  </ItemGroup>
</Project>

In your Program file, if you add your generated class to the using statement, you should be able to have IntelliSense pick up your generated class! Here’s my code:

using System;
using GeneratedClass;

namespace MyFirstSourceGeneratorConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var companies = new SeattleCompanies();
            Console.WriteLine("Running code from a generated class!");
            Console.WriteLine($"My favorite cloud: {companies.ForTheCloud}");
            Console.WriteLine("We're done here.");
            Console.ReadLine();
        }
    }
}

Note: In many cases, as of now you may need to restart Visual Studio to get IntelliSense support.

Implement the INotifyPropertyChanged pattern

In the C# community, a common request is the ability to automatically implement interfaces for classes with an attribute attached to them. Source generators will make this possible, and the INotifyPropertyChanged pattern is the classic example. Let's look at a basic use of it.

Let’s say you have a class with some properties you need to monitor. You can decorate it with a source generator implementation of this, like so:

using NotifyMe;

public partial class MyClass
{
    [NotifyMe]
    private bool _boolProp;

    [NotifyMe]
    private string _stringProp;
}

Then, the generator can give us this code:

using System;
using System.ComponentModel;

namespace NotifyMe
{
    [AttributeUsage(AttributeTargets.Field, Inherited = false, AllowMultiple = false)]
    sealed class NotifyMeAttribute : Attribute
    {
        public NotifyMeAttribute()
        {
        }
        public string PropertyName { get; set; }
    }
}

public partial class MyClass : INotifyPropertyChanged
{
    public bool BoolProp
    {
        get => _boolProp;
        set
        {
            _boolProp = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(BoolProp)));
        }
    }

    public string StringProp
    {
        get => _stringProp;
        set
        {
            _stringProp = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(StringProp)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

For a deeper dive, the Roslyn team has a good example of this in their GitHub repository.

Wrapping up

This is really early, but I hope you can already see the benefits C# source generators can offer you. To be honest, the developer experience isn't refined yet; once it is, you'll really see the power they provide.

]]>
<![CDATA[ Compress Images in GitHub Using Imgbot ]]> https://www.daveabrock.com/2020/04/23/compress-images-in-github-using-imgbot/ 608c3e3df4327a003ba2fe2d Wed, 22 Apr 2020 19:00:00 -0500 We’ve all been there: we’re reading an article online and it takes forever to load. To investigate—because we’re developers, after all—we open our developer tools, browse to the Network tab, and notice that the images are uncompressed. Oof. Of course, we would never do that, right?

For the uninitiated, image compression involves shrinking (or “compressing”) the size of a graphics file without degrading it significantly. This reduces bandwidth and allows sites to load faster. When compressing images, you can choose between:

  • Lossless compression, which maintains the same quality as a file before it was compressed (popular with PNG and BMP files)
  • Lossy compression, which takes a more aggressive approach (resulting in a smaller size, lost data, and decreased quality). If you’re like me, running a programming blog with simple screenshots, the difference is hard for the naked eye to see. If you’re editing high-quality images on a photography site, however, it’s definitely something to consider.

Image compression is not simple, so unless you specialize in this, you should definitely take advantage of the bevy of tools out there in the community. These tools come in many forms: command-line interfaces (CLIs), editor extensions (I really like the Docs Authoring Pack from Microsoft), and even web sites like TinyPNG (though that manual solution does not scale well).

Another approach, of course, would be automating it after-the-fact with a bot. While this gives you less control, you can take a “set it and forget it” mindset and not have to worry about it at all. That’s where I discovered Imgbot, a GitHub bot you can download that scans your GitHub repository for images, compresses them, and then submits a pull request for you to merge into your codebase. It delivers lossless compression by default and is configurable, as well.

Let’s get started with Imgbot and how it can help you manage your repository’s image files.

Download Imgbot

To get started, you’ll need to head over to the Imgbot site to download the bot to your GitHub account. After you pick a plan (free for public open-source repos!) you’ll need GitHub to give Imgbot access to your single repository (or all of them). Once that is installed, Imgbot will create a branch for you and publish a pull request that you can view and then merge, if it’s to your liking.

For me, I was able to save about 18% (some were already compressed):

Imgbot compression results

Opt-in to configuration options

If you take a look at the documentation, you can include configuration options for more control. This is done by dropping an .imgbotconfig file at the root of your repository.

You can configure a schedule, files to ignore, “aggressive” compression, wiki compression, and a minimum KB threshold. This sample configuration comes straight from the docs:

{
    "schedule": "daily",
    "ignoredFiles": [
        "*.jpg",
        "image1.png",
        "public/special_images/*"
    ],
    "aggressiveCompression": "true",
    "compressWiki": "true"
}

If you are looking for a hands-off solution to your needs, and are using GitHub, definitely give this solution a shot.

]]>
<![CDATA[ Tweeting New GitHub Pages Posts from GitHub Actions ]]> https://www.daveabrock.com/2020/04/19/posting-to-twitter-from-gh-actions/ 608c3e3df4327a003ba2fe2c Sat, 18 Apr 2020 19:00:00 -0500 For the last few years, I hosted my blog on the Ghost platform. It was a fast, Node-powered CMS, which allowed for stupid-simple publishing: I could get in, write, and get out. However, I was looking at another annual bill for $220 and I wanted to find a better (and cheaper) way. I knew of the myriad of static site generators out there today. I eventually landed on Jekyll + GitHub Pages. A month in, I’m happy that GitHub Pages gives me the flexibility to customize as I wanted, but also the simplicity. I can push a markdown file to GitHub, and then deploy to daveabrock.com automatically. All for just the cost of my domain name ($9 a year)!

After I publish my posts, I typically post them to Twitter. Ghost had a setting to do this. Could I do this from GitHub Pages? Not easily, it seemed, without leveraging external services. I could maybe toy with an RSS feed trigger from IFTTT or build something with Azure Logic Apps. Of course, it’s a silly thing to obsess over but why should I manually do something repeatedly?

A thought occurred to me: if I’m using GitHub, shouldn’t I be able to use their pipeline? I knew that I could leverage GitHub Actions in my repo, which the docs say allow me to “discover, create, and share actions to perform any job you’d like, including CI/CD, and combine actions in a completely customized workflow.” I would love to make this a deployment step.

Alas, being forced to manually tweet about a new post a few times a month wasn’t one of my life’s biggest regrets—but if I could spend a few hours learning about GitHub Actions while saving a few minutes every month, why not?

This post is about learning. Sure, you could look at my 16-line YAML file say, “looks pretty simple” and maybe it is, in retrospect. But when you try something new, you stumble and you learn—and that’s what this post is about.

The perfect world, and what I can do now

Without knowing much about GitHub Actions, here’s the workflow I envisioned:

  1. Every time I write a post, push my markdown file (and associated files, like images) to my repository.
  2. Run a GitHub Actions workflow step from that push
  3. In that step, get the post title and path to the post
  4. Send a tweet with this information

As you can imagine, it was the third step that took some thought. In a perfect world, here’s how I would grab the post and URL information:

  1. Somehow, grab the Git commit data and parse the committed files
  2. Assuming I’ll only be pushing one markdown file, get the title metadata which is sitting inside the file
  3. Get the path, which more or less is the filename (I'd just have to replace the .markdown extension with .html)
  4. Pass that string to the Send Tweet Action

Did I mention I didn't want to write any additional code or scripts, or spend more than a few hours on this? It was time to reset my expectations and think a little smaller. Thanks to some input from my friend Isaac Levin from Microsoft (on Twitter, naturally), I decided to just add the title and URL to my Git commit message. Then, I could pass the commit message down to the Twitter action. This was manageable.

Let’s get started.

What you need before you get started

If you’re playing along at home, you need to do the following before proceeding with this post:

  • Register for a Twitter developer account. It wouldn’t be a terrible idea to set up a test account first to avoid spamming your followers
  • Once the registration is approved, head over to developer.twitter.com/apps and create a Twitter application. You’ll need to do this so the Send Tweet Action can call the Twitter API on your behalf
  • A GitHub repository (this demo works for any push, but the obvious use case is a GitHub Pages site)

Add Twitter API secrets to GitHub

We will be using the Send Tweet Action, developed by GitHub's Edward Thomson. This action handles connecting to the Twitter API to send the tweet. We need to provide it the Twitter consumer API key, Twitter consumer API secret, Twitter access token, and Twitter access token secret.

To retrieve those, go to developer.twitter.com/apps, click Details next to the application you created, then the Keys and tokens link.

You need to add those to GitHub as encrypted secrets. From GitHub, go to Settings > Secrets. I recommend calling them TWITTER_CONSUMER_API_KEY, TWITTER_CONSUMER_API_SECRET, TWITTER_ACCESS_TOKEN, and TWITTER_ACCESS_TOKEN_SECRET, so you can easily follow the code in this post. For details on storing encrypted secrets in GitHub, read this GitHub article.

Create your first GitHub Action

To get started, you'll need to set up a GitHub Action. To do that, click the Actions link at the top of your repository, right next to Pull Requests. From there, you'll see the power of GitHub Actions—there are so many CI workflows and automation processes to choose from!

Actions options

For us, though, we’ll use Simple Workflow. In that pane, click Set up this workflow. Once you do that, you will see that you are now editing a blank.yml file (which you can rename), sitting in a .github/workflows directory. We’ll be updating this file.

Let’s send a tweet! Replace the contents in blank.yml with this, right in GitHub, and commit the file. (We will explain particulars in a second.)

name: Send a Tweet
on: [push]
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: ethomson/send-tweet-action@v1
        with:
          status: "NEW POST!"
          consumer-key: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
          consumer-secret: ${{ secrets.TWITTER_CONSUMER_API_SECRET }}
          access-token: ${{ secrets.TWITTER_ACCESS_TOKEN }}
          access-token-secret: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}

After you commit the file, you can head over to the Actions page in GitHub to monitor the status.

Job results

With any luck, you should see a tweet that says “NEW POST!” Let’s break down what’s happening. (You can also review the docs for details on GitHub Actions workflow syntax.)

  • name - the name of your workflow, which is displayed on the actions page
  • on - the event that triggers the workflow. I included push but you can also do pull_request or other options
  • jobs - a workflow comprises one or more jobs
  • runs-on - the type of machine to run the job on (required)
  • steps - one of a sequence of tasks in a job
  • uses - selects an action to run. In our case, we are using the Send Tweet Action from the GitHub Marketplace
  • with - the input that the action requires. For this action, the status is the tweet text, and the rest of the fields reference the secrets you created earlier.

So, this is great! However, your followers aren’t mind readers. Tweeting “NEW POST!” doesn’t do much. We now have to figure out how to tweet the commit message.

How to get just the commit message from Git

To get the latest commit from Git, I know I can use git log -1. Here's what I get back:

commit cc789c6bd3032e0d28111b2515f3931cb5b95e4b (HEAD -> master)
Merge: e5b2597 a07f6b5
Author: Dave
Date:   Sun Apr 19 08:05:02 2020 -0500

Merge branch 'master' of https://github.com/daveabrock/daveabrock.github.io

I already see two problems: (1) this is a merge commit message, and (2) I just want the message.

How about git log -1 --no-merges?

commit cc789c6bd3032e0d28111b2515f3931cb5b95e4b (HEAD -> master)
Author: Dave
Date:   Sun Apr 19 08:05:02 2020 -0500

my commit

Better! But I just want to see my commit. Luckily, Git has a --pretty flag. I can pass --pretty=%B, where %B is the raw body of the commit message. So now, let’s pass git log --no-merges -1 --pretty=%B. And what do we get?

my commit

Victory! (Or you could have googled it, but what fun is that?)

Passing commit message to my GitHub action

So now, I know how to do two things: how to tweet when I push and how to get my commit message. How do we connect them? From my blank.yml file, I want a way to run git log and save the output, then pass the output to the action's status field.

From what I could see, the checkout action allows me to fetch commit details. So, I can do something like this:

name: Send a Tweet
on: [push]
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - id: log
        run: echo "$(git log --no-merges -1 --pretty=%B)"
    # rest of file removed for brevity

If I look at the output from the Actions page, everything looks great. Now, how can I store this in a variable? Well, I was initially reading about a steps.<id>.outputs object. If, for example, I changed my run command to echo "::set-output name=message::$(git log --no-merges -1 --pretty=%B)", I could access the value using steps.log.outputs.message. Great, let's try it! Here's the full file. What do you think will happen?

name: Send a Tweet
on: [push]
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - id: log
        run: echo "::set-output name=message::$(git log --no-merges -1 --pretty=%B)"
      - uses: ethomson/send-tweet-action@v1
        with:
          status: "NEW POST: ${% raw %}{{ steps.log.output.message }}{% endraw %}"
          consumer-key: ${% raw %}{{ secrets.TWITTER_CONSUMER_API_KEY }}{% endraw %}
          consumer-secret: ${% raw %}{{ secrets.TWITTER_CONSUMER_API_SECRET }}{% endraw %}
          access-token: ${% raw %}{{ secrets.TWITTER_ACCESS_TOKEN }}{% endraw %}
          access-token-secret: ${% raw %}{{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}{% endraw %}

This results in a tweet that just says NEW POST:, sadly, with no commit data; my step output didn't make it to the send-tweet step the way I wrote it. Luckily, GitHub Actions has environment variables, which come to our rescue.

While I was on the right track with my run statement, I should have used set-env instead. This injects the value into an env object that GitHub Actions uses for environment variables. So: if I echo "::set-env name=POST_COMMIT_MESSAGE::<value>", I can access the value later using ${{ env.POST_COMMIT_MESSAGE }}. Here is the updated file.

name: Send a Tweet
on: [push]
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - id: log
        run: echo "::set-env name=POST_COMMIT_MESSAGE::$(git log --no-merges -1 --pretty=%B)"
      - uses: ethomson/send-tweet-action@v1
        with:
          status: "NEW POST: ${% raw %}{{ steps.log.output.message }}{% endraw %}"
          consumer-key: ${% raw %}{{ secrets.TWITTER_CONSUMER_API_KEY }}{% endraw %}
          consumer-secret: ${% raw %}{{ secrets.TWITTER_CONSUMER_API_SECRET }}{% endraw %}
          access-token: ${% raw %}{{ secrets.TWITTER_ACCESS_TOKEN }}{% endraw %}
          access-token-secret: ${% raw %}{{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}{% endraw %}

It works! So now, in the future, if I push a commit message to master in the format <Post title> <url> it will push to Twitter right away. (I can now customize based on PR or other policies, as well.)
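
For example, here's a sketch of what restricting the trigger to pushes on master might look like (instead of on: [push]):

on:
  push:
    branches:
      - master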

Wrapping up

I hope you enjoyed reading this post as much as I did writing it. This was my first foray into this, so if you know of a better and simpler way to do this, leave a comment below (or on Twitter, obviously).

]]>
<![CDATA[ What I'm Reading (Week of 4/14/20) ]]> https://www.daveabrock.com/2020/04/17/what-i-am-reading/ 608c3e3df4327a003ba2fe2b Thu, 16 Apr 2020 19:00:00 -0500 Here’s a weekly Friday shout-out of articles, videos, and other content I found interesting.

Development and testing

YouTube

Developer productivity

General technology topics

]]>
<![CDATA[ C# 8, A Year Late ]]> https://www.daveabrock.com/2020/03/29/csharp-8-year-late/ 608c3e3df4327a003ba2fe2a Sat, 28 Mar 2020 19:00:00 -0500 After working in C# for the better part of a decade, last year I went on a hiatus from the .NET ecosystem and dove in head-first on Node.js, TypeScript, and React. Now, I’m back! Since I’ve been writing C# code again and shaking off the rust, Visual Studio 2019 (and more specifically, the Roslyn analyzers) keeps reminding me: “Fun fact: did you know you can do this new thing?”

After repeating this exercise approximately 26 times, I decided to do some actual research and see what’s changed with C# 8—I hope this helps you, too.

This article, while on the long side (sorry!), just mentions a few of my favorite improvements to the language. If you want to geek out on everything, the What’s New in C# 8.0 article is a great place to start.

Worth mentioning: According to the article in question, C# 8.0 is supported on .NET Core 3.x and .NET Standard 2.1 (see the documentation on language versioning).

Also, a big thank you goes out to Microsoft’s David Pine for his valuable feedback on this post.

Simplified using declarations

When you use a using declaration, you are telling the C# compiler that the current variable should automatically be disposed at the end of the variable’s scope. Remember how your grandparents did this, way back in C# 7?

// using System.IO;

static void DoStuffWithAFile(string doingSomeStuff)
{
    using (var file = new StreamWriter("MyFile.txt"))
    {
        // three for-eaches
        // five logging statements
        // nine if statements
    }
}

Now, you can accomplish this with a simplified using var statement and avoid being caught up in bracket hell. It doesn’t seem like a huge deal, but definitely adds to code quality as it makes the code more readable and maintainable.

static void DoStuffWithAFile(string doingSomeStuff)
{
    using var file = new StreamWriter("MyFile.txt");
    // three for-eaches
    // nine if statements
}

Enhanced pattern matching

C# 8 introduces a lot of new pattern matching functionality, which helps provide a line of demarcation between data and functionality. My favorite improvements include a refined switch expression and simplified property patterns.

Switch statements are now switch expressions

With a regular switch statement, you have to go through a lot of motions: typing out each case, some breaks (if needed), and a default value.

Let’s take a look at an admittedly simple switch statement we’ve all written more than we’d like to admit.

public static string FindAProgrammingLanguage(string languageInput)
{
    string languagePhrase;

    switch (languageInput)
    {
        case "C#":
            languagePhrase = "C# is fun!";
            break;
        case "JavaScript":
            languagePhrase = "JavaScript is mostly fun!";
            break;
        case "TypeScript":
            languagePhrase = "TypeScript makes JavaScript more fun!";
            break;
        case "C++":
            languagePhrase = "C++ has pointers!";
            break;
        default:
             throw new Exception("You code in something else I don't recognize.");
    };
    return languagePhrase;
}

With syntax improvements in C# 8, we can:

  • Replace the case and : elements with a more concise =>
  • Replace the default statement with _.
  • Turn the concept of a switch statement into a switch expression

Look at us now!

public static string FindAProgrammingLanguage(string languageInput)
{
    string languagePhrase = languageInput switch
    {
        "C#" => "C# is fun!",
        "JavaScript" => "JavaScript is mostly fun!",
        "TypeScript" => "TypeScript makes JavaScript more fun!",
        "C++" => "C++ has pointers!",
         _ => throw new Exception("You code in something else I don't recognize."),
    };
    return languagePhrase;
}

Property patterns

Let’s use this switch expression to look at property patterns, which allow you to work with properties dependent on constant, predictable values.

If we wanted to match if programmer.PreferredLanguage equals C#, we could do it like this:

switch (programmer)
{
    case { PreferredLanguage: "C#" }:
        // do something
        break;
}

You can also use this on other pattern matching keywords introduced in earlier C# versions, like is.
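
For example, here's a quick sketch of the same check as an is expression:

if (programmer is { PreferredLanguage: "C#" })
{
    // do something
}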

Default interface methods

So, this is exciting: you can now define an implementation when you declare a member of an interface. A common scenario, coming straight from Microsoft, is if you want to make enhancements to your interfaces without breaking consumers of the existing implementation. Before C# 8, you couldn’t add members to an interface without breaking the classes that implement it.

Take a look at this article for a brief example to get you started.
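
As a minimal sketch of the idea (my own example, not the one from that article):

public interface ILogger
{
    void Log(string message);

    // Added later, with a default implementation, so existing
    // implementers of ILogger keep compiling unchanged.
    void LogError(string message) => Log($"ERROR: {message}");
}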

Nullable reference types and null-coalescing

Who doesn’t love working with null types? (Me, for one.) To express your intent to the compiler, you can do a few new things.

Nullable reference types

string? firstName;
string lastName;

The firstName variable is declared as a nullable reference type. We're used to doing this with nullable value types (which you've been doing since C# 2, by the way, when generics were the cat's meow). Because the ? is not appended to the lastName variable, the compiler sees it as a non-nullable reference type.

According to the Microsoft documentation, the compiler uses static analysis to determine if a nullable reference is non-null. To provide some, well, flexibility, you can use a null-forgiving operator (!) following a variable name, like lastName!.Length.
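
Here's a quick sketch of how the compiler reacts (with warnings, not errors, by default):

string? firstName = null;
Console.WriteLine(firstName.Length);  // warning: dereference of a possibly null reference
Console.WriteLine(firstName!.Length); // no warning, but throws at runtime if null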

The next time I introduce an unintended null reference to the codebase, I’ll just tell my colleagues my app wasn’t very null-forgiving. Life is all about marketing, after all.

Null-coalescing

This is quick, but also a quick win. If you wanted to assign a value to a variable only when it is null, you would previously write an explicit null check or maybe use a ternary operator. Now, it's far easier. Just do this with the ??= operator:

List<string> favoriteSongs = null;
favoriteSongs ??= new List<string>();

The operator has two question marks for the double-take you will do when you first see the operator, but you’ll learn to love it.

Async streams

Async improvements are not new to the language, as the async/await pattern was introduced way back in C# 5. With C# 8, though, we now have async streams, which allow you to write an async method to return multiple values.

Async stream methods:

  • Are declared with the async modifier
  • Return an IAsyncEnumerable<T>
  • Contain yield return statements to return the stream elements

You can see the benefits when you compare this to cycling through results with a traditional for loop. Check out the Microsoft Docs article for a full demonstration.
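
As a rough sketch of the shape (this assumes using System.Collections.Generic; and using System.Threading.Tasks; are in scope):

public static async IAsyncEnumerable<int> GetNumbersAsync()
{
    for (var i = 0; i < 10; i++)
    {
        await Task.Delay(100); // simulate async work
        yield return i;
    }
}

You'd consume it with await foreach (var number in GetNumbersAsync()) { ... }.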

Indices and ranges

As a part of syntactic sugar—a $500 word that only means expressing syntax more clearly—accessing indices and ranges is now a lot easier and should require fewer brain cells to determine, for example, which index you are at.

C# 8 introduces:

  • Two new types: System.Index and System.Range, which signify an index and range of a sequence, respectively
  • An index from end (^) operator, and a range operator (..)

As an example, let’s hypothetically say the world is forced to endure a pandemic and a programmer decides to catch up on all James Bond movies. Again, this is hypothetical, but you get the point.

Here’s my array of movies:

var movies = new string[]
{
    "Dr. No",
    "From Russia with Love", // best one, obviously
    "Goldfinger",
    "Thunderball",
    "You Only Live Twice",
    //...
    "Die Another Day",
    "Casino Royale",
    "Quantum of Solace",
    "Skyfall",
    "Spectre" // eek
};

Indexes

You should know how indexes work, hopefully—as arrays are zero-based, Dr. No would be 0, From Russia with Love would be 1, and all the way up to Spectre having an index of 9. What C# introduces now, though, is the ^ index from end operator, which works as the complete opposite.

For example, the first element has an index from end value of ^10, the next element ^9, and so on.

var movies = new string[]
{
    "Dr. No", // ^10
    "From Russia with Love", // ^9
    "Goldfinger", // ^8
    "Thunderball", // ^7
    "You Only Live Twice", // ^6
    //...
    "Die Another Day", // ^5
    "Casino Royale", // ^4
    "Quantum of Solace", // ^3
    "Skyfall", // ^2
    "Spectre" // ^1
    // movies.Length value is ^0
};

Now you don’t have to count up from the start of the array, which makes things a lot easier. For example, to get Skyfall (please use string interpolation to prove you aren’t a monster):

Console.WriteLine($"The second to last Bond movie is {movies[^2]}");

See? You’ve already forgotten about Spectre.

Ranges

I don’t know about you, but 90% of my time getting a substring involves me saying to myself (or anyone who will listen): “Is the end inclusive? Is it exclusive? Why do I always have to Google this?”

With ranges in C# 8, remember this: the start of the range is included, and the end of the range isn’t. With that said, let’s grab Connery’s first three Bond films.

// Returns first three movies (index 0, 1, 2 => Dr. No, From Russia, Goldfinger)
var conneryMovies = movies[0..3];
Console.WriteLine($"The first three Connery movies are: {string.Join(", ", conneryMovies)}");

You can use the ^ index from end operator, as discussed previously, to grab my favorite Daniel Craig films (Skyfall, Quantum of Solace (work with me here), and Casino Royale).

var craigMovies = movies[^4..^1];
Console.WriteLine($"My favorite Craig movies are: {string.Join(", ", craigMovies)}");

You can also use the System.Range struct directly, then drop it inside [ and ] to get that same array-like feeling.
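
For example (a quick sketch):

Range conneryRange = 0..3;
var sameConneryMovies = movies[conneryRange]; // equivalent to movies[0..3]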

Indices and ranges are not just for strings—according to the documentation, you can also use them with Span<T> and ReadOnlySpan<T>.

Staying current with C# developments

Did you know C# is open source and the design of the language has its own GitHub repo? There, you can find active feature proposals, notes from design meetings, and the language version history. Community involvement is definitely encouraged.

The .NET team recently had an All Things C# stream, where some key designers of the language discussed what’s coming—I’ll be writing about some of these things shortly, but until then the video is definitely worth your time.

]]>
<![CDATA[ How to rename a Git branch ]]> https://www.daveabrock.com/2019/09/30/how-to-rename-a-git-branch/ 608c3e3df4327a003ba2fe28 Sun, 29 Sep 2019 19:00:00 -0500 As a developer for more than a decade, sometimes I wonder how it’s taken me this long to do a thing. This morning, it was renaming a Git branch. Whatever the case, I finally found the need to do it this morning and am happy to share how easy it is.

Rename locally

If you aren’t already at the branch in question, check it out:

git checkout <my-branch-i-want-to-rename>

Now that your branch is checked out, rename it by using the git branch command:

git branch -m <new-name-of-my-branch>
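
As an aside, you don't even have to check the branch out first; from anywhere in the repository, this form does the rename in one step:

git branch -m <my-branch-i-want-to-rename> <new-name-of-my-branch>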

Push to remote, if needed

If you have pushed the old branch, you’ll need to clean it up in the remote branch as well. To do this, delete the old branch and then push your newly-named branch to your repository:

git push origin --delete <my-branch-i-want-to-rename>
git push origin -u <new-name-of-my-branch>

Mission accomplished! Take the rest of the day off—you deserve it.

]]>
<![CDATA[ Keep Those Boolean Conditionals Simple ]]> https://www.daveabrock.com/2019/02/07/keep-those-boolean-conditionals-simple/ 608c3e3df4327a003ba2fe27 Wed, 06 Feb 2019 18:00:00 -0600 When working on legacy projects (or even new ones!) you are bound to come across code that is interesting—and trust me, your colleagues will say the same about your code from time to time!

There’s always a balance between terseness and readability, and sometimes you can go overboard. Here’s one pattern I see from time to time that I don’t particularly enjoy.

// C# syntax
public bool CheckABoolean(int a)
{
    if (a > 42)
        return true;
    else
        return false;
}

Instead, do this. Your eyes and your colleagues will thank you.

// C# syntax
public bool CheckABoolean(int a)
{
    return a > 42;
}
]]>
<![CDATA[ How 2018 Went ]]> https://www.daveabrock.com/2018/12/28/how-2018-went/ 608c3e3df4327a003ba2fe26 Thu, 27 Dec 2018 18:00:00 -0600 I’m not a fan of resolutions—how very waterfall-y. In this fail-fast agile culture we live in now, that’s a long time to wait to fix things. But since it is the end of the year—a great time for downtime and reflection—it’s only natural to take a look back at how the last 365 days went.

Here’s why I especially like this exercise: for a lot of us, it’s so easy to be self-critical. It’s very important to learn and grow from your mistakes but equally important to understand and reflect on your accomplishments.

A mission to become active in the community

In 2018, I made it my mission to become active in the community.

Why? Here are my reasons:

  • Learn by doing: I learn much more when releasing something out into the world
  • Meeting great people: You can reach out to others, make friends, and get out of your bubble by drawing on others’ experiences
  • Help others: The developer community has helped me throughout my career and I love being able to do the same
  • Your job: You and your employer benefit greatly from all your work

While I previously enjoyed engaging with the community—a talk here, a post there—I largely was a consumer. At this point in my career, I wanted to give back and help others.

I’m proud to say I reached thousands of developers in 2018 with conference talks, blog posts, and open-source community contributions.

Blogging contributions

Did you know before I went into software development, I was a technical writer? True story. In 2018, I really enjoyed mixing my writing experience with technical expertise to help developers across the globe.

I initially started blogging so I could fill knowledge gaps while preparing for conference talks. As a result, I was pleasantly surprised to learn that my posts reached more people than I would have ever imagined. This reinforced one of my core beliefs: if you learn something, never keep it to yourself. Share it with your friend, your team, or—even better—the world.

Here’s a list of my most popular posts this year, according to Google Analytics.

Speaking events

Much like a lot of people, public speaking used to frighten me. I could always speak casually at work, but when I was in front of people everything changed. It was so difficult for me I started to worry about it harming my career.

I tried a casual lunch session at work in front of friends and co-workers. It wasn’t pretty at first, but I was happy to conquer my fears head-on. As I kept speaking and my confidence grew I also came to a realization: I actually like doing this.

For example, at Twin Cities Code Camp in April I spoke in front of about 100 people—my first time in a group this large. It was a great feeling to not be completely mortified of public speaking anymore.

I’m not a great speaker by any means, but I’m getting better after every talk I give. It’s time-consuming, exhausting, and sometimes scary. But when someone comes up to you after a talk and thanks you for teaching them something new, it’s all worth it.

In 2018, I spoke at these events:

  • South Florida Code Camp
  • Twin Cities Code Camp
  • Central Wisconsin IT Conference
  • MADdotNET (.NET user group in Madison, WI)
  • Milwaukee Code Camp
  • Chicago C# Developers Group

Open-source contributions

As a developer in the .NET space, it’s great to see how Microsoft has transformed into an open and inclusive culture. For example, if you see an issue or content gap with developer documentation you are empowered to suggest a fix or even do it yourself.

I made a lot of contributions to the ASP.NET Core documentation this year—and in total, I had 38 pull requests merged and live. Obviously, some of them were cosmetic (I’m not immune to a “typo fix” commit), but I was happy to be a big community contributor to the documentation.

The entire ASP.NET Core Docs team deserves a lot of praise for living open source (I’ve personally worked with Rick Anderson and Scott Addie the most on that team). It isn’t easy being out in the open, but they live it and are truly community-driven.

Looking ahead

It was great being a part of the community this year, and hope to make 2019 even more impactful. If you have any feedback for me—including talk ideas or general suggestions—let me know!

]]>
<![CDATA[ Level Up Your GitHub Experience with Chrome Extensions ]]> https://www.daveabrock.com/2018/11/26/level-up-github-experience-with-chrome-extensions/ 608c3e3df4327a003ba2fe25 Sun, 25 Nov 2018 18:00:00 -0600 If you do any work in open source, you probably live for GitHub. With all the time you spend using it, you can improve your experience by leveraging a variety of browser extensions.

The following is a rundown of my favorite GitHub extensions on my preferred browser, Google Chrome. While I am focusing on Chrome extensions, you’ll find that there are plenty of GitHub extensions for other browsers, too.

Do you prefer a GitHub extension not listed here? Let me know in the comments!

Awesome Autocomplete for GitHub

The Awesome Autocomplete for GitHub extension, by Algolia, is by far my favorite GitHub Chrome extension.

This extension supercharges the top GitHub search bar by adding auto-completion, last active users, top public repositories, and more.

Search

You can even type aa<space> to find GitHub repositories directly from your Chrome address bar!

File Icon for GitHub, GitLab and Bitbucket

The default file icons in GitHub are so, so boring. Are you tired of seeing the default notepad icon next to every file, no matter the extension?

The File Icon for GitHub, GitLab and Bitbucket extension, by Homer Chen, adds file icons to GitHub repositories. Personally, this makes it easier for me to find certain files in folders with many files.

FileIcons

GitHub Code Folding

Any reasonable code editor gives you the ability to easily expand or collapse code blocks for readability. The GitHub Code Folding extension, by Noam Lustiger, allows you to do this inside the GitHub user interface.

CodeFolding-1

GitHub Repository Size

The GitHub Repository Size extension, by Harsh Vakharia, is another extension that is short, sweet, and useful.

The extension lists the size for an entire GitHub repository and also the size of each file.

Size

Hide Files on GitHub

The Hide Files on GitHub extension, by Sindre Sorhus, hides nonessential project files and folders from the GitHub user interface (like Yarn lock files, the .git folder, and so on). Of course, you can customize which files to ignore.

Isometric Contributions

The Isometric Contributions extension, by Jason Long, shows an isometric pixel art version of a user’s contribution chart. It also includes more granular user data, like a user’s busiest day, current streak, and longest streak.

Of course, you can easily switch between the isometric view and the default view. One caveat: this extension doesn’t support some of the default chart’s functionality, like hovering over a day for details.

IsometricContributions

Notifier for GitHub

The Notifier for GitHub extension, by Sindre Sorhus, displays your unread GitHub notifications count. This way, you can keep track of any missed notifications even when you don’t have GitHub open.

(The screenshot grabbed from the extension’s repository.)

screenshot

Octotree

I cannot do without the Octotree extension. This extension offers an easy-to-navigate code tree, allowing for lightning-fast browsing of a repository’s files.

Octotree

Refined GitHub

The Refined GitHub extension, by Sindre Sorhus, offers a slew of features that are missing from GitHub. If you look at the repository, you’ll see that GitHub has integrated functionality that was first developed in this extension.

Some functionality includes the ability to mark issues and pull requests as unread, reaction avatars on comments, clickable references to issues and pull requests, and links to an issue’s closing commit or pull request.

Render Whitespace on GitHub

Where do you stand on the tabs vs. spaces argument? You can only pick one. (In an effort to keep all my readers, I will abstain from this one.)

The Render Whitespace on GitHub extension, by Gleb Mazovetskiy, allows you to quickly find out the formatting of a repository’s code.

Whitespace

Twitter for GitHub

When I discover a GitHub repository, I like to find out information about the author—generally, this includes following the person on Twitter, which is often the best place to contact someone or learn about them. If the author does not include their Twitter handle in his or her GitHub bio, it can be tedious and time-consuming to search on Twitter.

The Twitter for GitHub extension, by Nicolás Bevacqua, attempts to find a user’s Twitter handle and display it on a GitHub profile.

GitHubTwitter
]]>
<![CDATA[ Razor Support for ASP.NET Core Apps in Visual Studio Code ]]> https://www.daveabrock.com/2018/11/19/net-core-apps-in-visual-studio-code-now-have-razor-support/ 608c3e3df4327a003ba2fe24 Sun, 18 Nov 2018 18:00:00 -0600 The beauty of a cross-platform framework like ASP.NET Core is your ability to choose tooling you prefer. With all its advantages, Visual Studio can sometimes be too powerful for what you need.

With ASP.NET Core, the days of being locked down to writing .NET in Visual Studio are over. For example, you can write your applications on editors like Visual Studio Code, whether you are on Mac, Windows, or a flavor of Linux.

Earlier this year, I spoke and wrote extensively about how to write an ASP.NET Core app with Visual Studio Code. As this support is evolving, you may notice experiences here and there that do not compare to using a full-fledged editor like Visual Studio—but if you are using Code for your Core apps you may find the trade-off worth it.

That gap is narrowing with the announcement that Visual Studio Code now has support for Razor, the .NET markup syntax engine that allows you to write dynamic views using .NET code and HTML markup.

As discussed in the announcement, this is very much in preview and has limitations. Read the article for details on what these limitations are, how to provide feedback, and how to disable it if you come across issues.

Take advantage of Razor support in Code

Before you look at Razor support in Code, create an ASP.NET Core application in Visual Studio Code. We will just create an application based on the ASP.NET Core web application template. For an in-depth tutorial on using ASP.NET Core with Code, you can review my blog post or an in-depth tutorial at the official Microsoft Docs site.

As prerequisites, make sure you have Visual Studio Code, the C# extension, and the .NET Core SDK installed.

Create a quick .NET Core web app in Visual Studio Code

From Visual Studio Code, enter the following commands from the integrated terminal (if you don’t see it, click Terminal > New Terminal):

dotnet new webapp -o TestRazorSupport
code --reuse-window TestRazorSupport

The first command uses the .NET Core CLI to create a new application based on the webapp template. The application now exists within a TestRazorSupport folder. The second command uses a Code command-line switch to open the application in your active Code window.

Now, you can open any view file (.cshtml) to experiment with Razor support.

Explore Razor support in Visual Studio Code

If you open a view—I will be looking at Pages/Contact.cshtml—you can see Razor support in action.

First, let’s update the ContactModel with some additional properties. Here’s what my Contact.cshtml.cs now looks like:

using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace TestRazorSupport.Pages
{
    public class ContactModel : PageModel
    {
        public string Message { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }

        public void OnGet()
        {
            Message = "Your contact page.";
            Name = "Dave Brock";
            Email = "dave@myemail.com";
        }
    }
}

In Contact.cshtml, let’s access the Name property to check out the Razor support.

Name property

It works as you would expect—even inside HTML attributes—as we access the Email property:

Email
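
Here's roughly what those Contact.cshtml additions look like (a sketch of my markup):

<p>Name: @Model.Name</p>
<p>Email: <a href="mailto:@Model.Email">@Model.Email</a></p>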

And if we access .NET APIs (and steal from the announcement) it works beautifully!

DateTime
]]>
<![CDATA[ Share Blazor Components with Shared Class Libraries ]]> https://www.daveabrock.com/2018/11/12/using-blazor-shared-libraries/ 608c3e3df4327a003ba2fe23 Sun, 11 Nov 2018 18:00:00 -0600 UPDATE! A lot has changed in the two years since this post was first written. I’ve updated this post to make it current and correct.

As developers, we often take advantage of the benefits of sharing common code in a specific project. That way, it can be shared and maintained in a centralized way—accessed easily whether it is your team’s source control repository or, in the .NET world, a NuGet package.

This is no different with Blazor. When we use Razor component class libraries, we can easily share components across projects. When we share components, all our target project needs to do is reference the shared project and add the component—no style imports necessary!

In this article, we’ll demonstrate the power of Razor component class libraries using a simple example with the default scaffolded Blazor application.

Prerequisites

Before getting started, make sure you have a recent .NET SDK installed (preferably 3.x or later).

Create a Blazor application

Let’s create a Blazor application.

  1. From Visual Studio, click File > New > Project and select the Blazor App project template and click Next.
  2. Give it a name, like MySharedLibDemo, and click Create.
  3. Select the Blazor template (Blazor Server App or Blazor WebAssembly App) and click OK. Either type should work here.
  4. To add a Razor component class library, we’ll use the .NET Core command-line interface (CLI).
  5. Right-click the solution and select Open Command Line. From your preferred command line utility, enter dotnet new to see all the .NET Core templates available to you. We will be adding the Razor Class Library template to our project.
  6. From your prompt, enter dotnet new razorclasslib -o MySharedBlazorLibrary. This will add the MySharedBlazorLibrary project in your directory.
  7. Right-click your solution and click Add > Existing Project. Browse to your library, select the MySharedBlazorLibrary.csproj file, and click Open. Your project structure will now resemble the following.
  8. Finally, reference the shared project. From your main project, right-click Dependencies > Add Project Reference… Then, select your newly created project and click OK.

Add shared component to your main project

Now, all you need to do is add the component to your project. If you remember, the shared project includes a Component1.razor file containing a styled component. We’ll now add this to our main project.

From your Pages/Index.razor file, add a using statement at the top of the file.

@using MySharedBlazorLibrary

Now, below the SurveyPrompt component, add the Component1 component. As you begin typing, you can use IntelliSense.

Autocomplete

Your Index.razor component should now look like this:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

<Component1 />

View your changes

After you save your changes, reload the page to see your new component in action.

PageWithComponent-1

You have just referenced a component from a shared library with minimal effort. By merely importing the library, you were able to add a component and its styles quite easily.

]]>
<![CDATA[ GitHub tip: You don't need that .git extension to clone ]]> https://www.daveabrock.com/2018/10/27/dont-need-git-extension-to-clone/ 608c3e3df4327a003ba2fe22 Fri, 26 Oct 2018 19:00:00 -0500 If you want to clone a GitHub repository, the most common (and documented) way is to browse to it and click that green Clone or download button. In this case, I am cloning the .NET machine learning project:

CloneOrDownload

Typically, I would copy the Git repository path and clone it in a command line window, like so:

git clone https://github.com/dotnet/machinelearning.git

As it turns out, this is a waste of clicks. All you have to do is grab the URL from the address bar of your favorite browser and ignore the .git extension altogether.

Try this:

CloneWithoutExtension
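
In other words, this works just as well:

git clone https://github.com/dotnet/machinelearning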
]]>
<![CDATA[ JavaScript Scope Can Be Tricky ]]> https://www.daveabrock.com/2018/04/30/javascript-scope-can-be-tricky/ 608c3e3df4327a003ba2fe21 Sun, 29 Apr 2018 19:00:00 -0500 I was writing some vanilla JavaScript this weekend - it has been quite a while since I did so. As I’ve been writing mostly C# code lately, what always gets me with JavaScript isn’t its pain points or subtle nuances. It’s the different behavior from compiled languages I’m used to, like C# or Java. No matter how often I work in JS, as long as I’m committed to being a polyglot, it’ll trip me up to the point that it’s just a little bit of an annoyance.

Take this seemingly simple JavaScript code. What will be output to the browser console?

var myString = 'I am outside the function';
function myFunction() {
    console.log(myString);
    var myString = 'I am inside the function';
};
myFunction();

Was your guess I am outside the function? Or was it I am inside the function? I hate to let you down, but in either case you would be incorrect.

In your favorite browser’s developer tools, the console will log undefined. But why?

In JavaScript, functions create brand new scopes. If we condense the example a little:

function myFunction() {
    // new scope, the variable is inaccessible outside the function
    var myString = 'I am inside the function';
}
myFunction();
console.log(myString); // ReferenceError: myString is not defined

Of course, if you’ve worked in JavaScript (or basically any programming language), you know that declaring a variable without an assignment gets you nothing back but null or, in JavaScript’s case, undefined:

var dave;
console.log(dave); // will log undefined

So, if we look at this again:

var myString = 'I am outside the function';
function myFunction() {
    console.log(myString);
    var myString = 'I am inside the function';
};
myFunction();

To take all we’ve learned, we need to be aware of how something called hoisting works. For our purposes just know this: variable declarations in JS are always hoisted to the top of the current scope – variable assignments are not.

So here, the declaration is hoisted to the top of the scope - or, in our case, the myFunction() function. So before the console.log statement in the function, you can basically envision a var myString in the first line of the function. And, of course, knowing this, it’s clear why the logging statement comes back as undefined.
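
In other words, the function behaves as if it were written like this:

function myFunction() {
    var myString;                          // declaration hoisted, no value yet
    console.log(myString);                 // undefined
    myString = 'I am inside the function'; // assignment stays put
}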

Just remember this: always define your variables at the top of the current scope. Always.

]]>
<![CDATA[ Full-Stack Development in Visual Studio Code with ASP.NET Core ]]> https://www.daveabrock.com/2018/03/05/full-stack-development-in-vs-code-with-asp-net-core/ 608c3e3df4327a003ba2fe20 Sun, 04 Mar 2018 18:00:00 -0600 The true power with ASP.NET Core is in its flexibility. With a powerful CLI, I can run an app on any platform with just a basic text editor, if I so desire. And speaking of desire: when I write code, I prefer Visual Studio Code. It’s fast, responsive, and gives me exactly what I need and nothing more. Of course, I rely on Visual Studio 2017 from time-to-time for advanced debugging and profiling. But when I’m in a code-debug-code workflow, nothing is better.

Visual Studio Code is perfect for ASP.NET Core - allowing me to write front-end and back-end code in one great lightweight environment. Unfortunately, when I talk to ASP.NET Core developers, Visual Studio Code often isn’t even a consideration. Often, I hear that while Code is a great tool, it’s only for those slinging front-end code. And after decades of the full-fledged Visual Studio being the only option to write .NET code, who can blame them?

Let’s get started and walk through how you can debug C# code, query databases, and more with .NET Core using Visual Studio Code.

Note: All screenshots and keyboard shortcuts are using Windows 10, but they should be very similar on other operating systems like a Mac.

Prerequisites

Before we get started, you’ll need to make sure you have the following installed on your platform of choice.

Now that we have everything set up, let’s start by creating a new ASP.NET Core web application from the ASP.NET Core command-line interface (CLI).

Create a new Core project

We’re going to create a new Core project first, using the ASP.NET Core CLI.

First, from a terminal window let’s get a list of all available templates by adding the -l flag to our dotnet new command:

dotnet new -l

Create a new Core project

You can see that we can create a new project based on many different project template types. For our purposes, any application will do - let’s go ahead and create a new ASP.NET Core MVC App by including the Short Name in the command in your terminal window.

dotnet new mvc

MVC project options

Now that we successfully created our project, we can open our new solution in Visual Studio Code. From the terminal, navigate one folder down to the project directory and enter code . from your terminal window.

Working with your Core solution in Code

When you first open the C# project, you’ll get a warning that Code needs to build required assets to the project.

Build required assets dialog

After you click Yes, you’ll see a .vscode folder added to your project, which includes the following:

  • launch.json - where Code keeps debugging configuration information
  • tasks.json - where you can define any tasks that you need to run - for example, dotnet build or dotnet run.

For example, here’s what my tasks.json looks like out of the box.

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "command": "dotnet build",
            "type": "shell",
            "group": "build",
            "presentation": {
                "reveal": "silent"
            },
            "problemMatcher": "$msCompile"
        }
    ]
}

This allows me to build my solution without needing to keep entering dotnet build from the command line. In Windows, I can hit Ctrl + Shift + B much like I can in regular Visual Studio. Try it out!
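
For reference, here's a trimmed-down sketch of what a generated launch.json might contain (the program path and framework folder are placeholders; yours will match your project):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (web)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            // placeholder path; match your project name and target framework
            "program": "${workspaceFolder}/bin/Debug/netcoreapp2.0/MyMvcApp.dll",
            "cwd": "${workspaceFolder}",
            "env": {
                "ASPNETCORE_ENVIRONMENT": "Development"
            }
        }
    ]
}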

Debugging C# Code

Now, let’s take a look at how C# debugging works in Visual Studio Code. First, set a breakpoint anywhere in the application much like you would do in any other IDE (in a gutter next to the line number). In my case, I’m creating a breakpoint inside the About controller action in Controllers/HomeController.cs.

Create a breakpoint

Now, navigate to the Debug tab in Code and click the Debug button.

Debug button

Code will launch your application using http://localhost:5000.

Launch site

Next, trigger your breakpoint - for me, that means clicking the About link in the top menu so that I can enter the About action in the Home controller.

If you go back to the Debug section in Code, you’ll see all the debugging options at your disposal: accessing your variables, call stack, all your breakpoints, and even the ability to watch variables or objects.

Debugging options

Now, let’s add a watch statement for the ViewData message.

Watch statement - before

It’s null now because I haven’t executed that line of code yet. Once I step into the method, you’ll see the value of ViewData["Message"].

Watch statement - after

Now that we’ve been able to debug C#, let’s query a database!

Working with Databases in Code

So, I’ve created a new C# project and can debug it in Visual Studio Code. Pretty cool, right? But with any server-side application, you’ll want to work with a database. You can easily do this in Code - for our purposes, we’ll be working with SQL Server.

(Note: If you are working with SQL Server on a Mac, a setup guide is beyond the scope of this post. This piece should be able to assist you.)

Before we get started, we’ll need to install Microsoft’s own SQL Server extension from Visual Studio Code. Just search for mssql in the Extensions panel and you should have no problem finding it.

mssql extension result

Now that you have the extension installed, you’ll need to create a connection to your database server (this can be locally or even something hosted in Azure). To start connecting to the database, access the Command menu (Ctrl + Shift + P) and search for sql. You’ll see all the commands the extension has provided for you.

mssql command menu

Select MS SQL : Connect from the command menu and follow the prompts.

  • Server name - since I am connecting locally, I just entered (localhost)\MSSQLLocalDB. You can do this, or if hosted in Azure the mydb.database.windows.net address, or even a remote IP address.
  • Database name - enter the database you want to use (this is optional).
  • Integrated/SQL Server authentication - pick your authentication method. If you’re using a local DB, Integrated will do. For something like Azure, you’ll need SQL authentication.
  • Profile name - you can optionally enter a name for the profile.

Now that we’re set up and connected, let’s set up a database with horror movies. To do this, first create a .sql file in your project.

Create sample database

In your SQL file, create a database. Feel free to copy and paste this syntax, then execute (Ctrl + Shift + E).

CREATE DATABASE MyHorrorDB

Now, if you execute the following you should be able to see the name of your new database.

SELECT Name FROM sys.Databases

Now that we have proof of our created database, execute the following SQL to populate it with sample data. You should notice IntelliSense-like completion!

USE MyHorrorDB
CREATE TABLE Movies (MovieId INT, Name NVARCHAR(255), Year INT)
INSERT INTO Movies VALUES (1, 'Halloween', 1978);
INSERT INTO Movies VALUES (2, 'Psycho', 1960);
INSERT INTO Movies VALUES (3, 'The Texas Chainsaw Massacre', 1974);
INSERT INTO Movies VALUES (4, 'The Exorcist', 1973);
INSERT INTO Movies VALUES (5, 'Night of the Living Dead', 1968);
GO

Now, do a blanket SELECT to get all your movies (obviously, a blanket SELECT is not recommended for performance reasons but this is pretty harmless since we’re querying five movies).

SELECT * FROM Movies

Now you should see your sample data in action!

Query results

You can do much more than query databases, though! Take a look at this document for more details.

Wrapping up

While this is just scratching the surface, I hope this convinces you to give Visual Studio Code a try for developing in ASP.NET Core when you don’t need a full-fledged IDE. And, of course, leave me a comment if you want to keep this conversation going!

Thanks to Scott Addie of Microsoft and Chris DeMars for performing a technical review of this post.

]]>
<![CDATA[ Using Anchor Links in Markdown ]]> https://www.daveabrock.com/2018/03/04/using-anchor-links-in-markdown/ 608c3e3df4327a003ba2fe1f Sat, 03 Mar 2018 18:00:00 -0600 This is a short post that might help you.

I wasn’t sure how to use anchor links in Markdown. These come in handy when you have a long post and want to link to different sections of a document for easy navigation.

The nice thing about Markdown is that it plays so well with straight HTML - so I was pleased to get it working on the first try.

First, add an anchor as regular HTML in your Markdown element. Here, it is right at a heading.

### <a id="MyHeading"></a>My Heading ###

Now, I can link it using a standard Markdown link.

Where is my [heading](#MyHeading)?

]]>
<![CDATA[ The AppCache API: Is It Worth It? ]]> https://www.daveabrock.com/2017/02/04/the-appcache-api-is-it-worth-it/ 608c3e3df4327a003ba2fe1e Fri, 03 Feb 2017 18:00:00 -0600 The Application Cache (AppCache) API allows offline access to your application by providing the following benefits:

  • Performance - Specified site resources come right from disk, avoiding any network trips
  • Availability - Users can navigate to your site when they are offline
  • Resilience - If a server breaks or something bombs and your site is inaccessible online, users can still use your offline site.

In this post, we’ll explore how to implement AppCache and investigate its benefits and many drawbacks.

Creating a manifest file

First, you’ll need to create a manifest file, a static text file that tells the browser which assets to cache for offline availability. A manifest can have three different sections (the order and frequency of these sections do not matter):

  • Cache - This default section lists the site assets that will be cached after an initial download.
  • Network - Files listed here can come from the network if they aren’t cached. Otherwise, the network isn’t used even if the user is online.
  • Fallback - An optional section that specifies fallback pages to use if a resource is unavailable. Each entry contains two URIs: the first identifies the resource, and the second is what is used if the network request fails or errors.

Keep in mind the following “gotchas” with the manifest: if the manifest file cannot be found, the cache is deleted. Also, if the manifest or a resource specified in the manifest cannot be found and downloaded, the entire offline caching process fails and the browser will keep using the old application cache.

So, when will the browser use a new cache? This occurs when the user clears their cache, when the manifest file is modified, or when an update is triggered programmatically (we’ll get to that later).

Pay special attention to the second item. A common misconception is that when any resources listed within the manifest change, they will be re-cached. That is wrong. The manifest file itself needs to change. To facilitate this, it is a common practice to leave a timestamp comment at the top of the file that you can update whenever the manifest changes, such as in my example below.

CACHE MANIFEST
# v5 2016-08-15
index.html
css/main.css
scripts/script.js
images/hanna.jpg
images/emma.jpg

NETWORK:
*

FALLBACK:
/ offline.html

Referencing the manifest file

Now that you’ve created the manifest file, you then need to reference it in your web page(s). To do this, you’ll need to append the manifest attribute to the opening <html> tag of any page you want cached:

<html manifest="manifest.appcache">
...
</html>

This bears repeating: the attribute must be included on every page that you want cached. The browser will not cache a page if the manifest attribute is not included on the specific page.

Using the AppCache APIs

Now that you have created the manifest file and decided which pages you want to be cached, you can now talk to the AppCache programmatically from the global JavaScript window.applicationCache object. From this object, you can call the following methods:

  • abort - kills the cache download process
  • addEventListener - registers an event handler for a specific event type
  • dispatchEvent - dispatches an event to the object (part of the standard EventTarget interface)
  • removeEventListener - removes a handler previously registered by addEventListener
  • swapCache - swaps an old cache for a new cache
  • update - triggers an update of the existing cache only if updates are available

From the MDN documentation on AppCache, here’s how you would see if your application has an updated manifest file.

function onUpdateReady() {
  console.log('I found a new version!');
}

window.applicationCache.addEventListener('updateready', onUpdateReady);

if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
  onUpdateReady();
}

Note, as specified in the MDN documentation, that “…since a cache manifest file may have been updated before a script attaches event listeners to test for updates, scripts should always test applicationCache.status.”
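
To go the other way and proactively check for a new manifest, you can combine the update and swapCache methods from the list above. Here’s a minimal sketch, assuming the page was loaded with a manifest attribute (calling update throws an error otherwise):

var appCache = window.applicationCache;

appCache.addEventListener('updateready', function () {
  // A new cache has been downloaded; swap it in so that
  // subsequent resource loads come from the new version.
  // Content already rendered on the page still requires a
  // reload to pick up changes.
  appCache.swapCache();
});

// Ask the browser to re-check the manifest for changes.
appCache.update();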

The fine print

In Application Cache is a Douchebag, Jake Archibald brilliantly lays out the many limitations of the AppCache API. You should read that piece for the full details, but here are a few gotchas that I haven’t mentioned yet:

  • Files come from the cache even if you’re online – you’ll first get a version of the site from your cache. Only after rendering does the browser check the manifest for updates. As noted in the article, this means the browser doesn’t have to wait for connections to time out, but it’s somewhat annoying.
  • Non-cached resources don’t load on a cached page – if you cache, for example, a web page but not an image on it, the image will not display on the page even when you are online. Really. You would get around this by adding the * to the Network section in your manifest. (However, these connections will fail anyway if you are offline.)
  • You can’t access a cached file by its parameters – accessing index.html by parameters such as index.html?parameter=value will not retrieve the cached page. It will fetch the page over the network.

As you can imagine, AppCache’s limitations have spurred a “let’s not use AppCache” movement across the Web. It has been removed from the Web standards in favor of service workers. When describing service workers, the MDN documentation summed up AppCache’s rise and fall nicely:

The previous attempt — AppCache — seemed to be a good idea because it allowed you to specify assets to cache really easily. However, it made many assumptions about what you were trying to do and then broke horribly when your app didn’t follow those assumptions exactly.

So to answer my initial question, it is not worth it; don’t use AppCache. Unless you are completely aware of the limitations and able to live with them, AppCache’s drawbacks outweigh its benefits. The community has spoken, and using local storage or service workers is the preferred approach.

]]>
<![CDATA[ Publish your localhost with the World using Localtunnel ]]> https://www.daveabrock.com/2017/02/04/publish-your-localhost-with-the-world-using-localtunnel/ 608c3e3df4327a003ba2fe1d Fri, 03 Feb 2017 18:00:00 -0600 During your development process, you may need to show off your work from a browsable URL but you aren’t quite ready for a deploy or even a check in. For example, you might want to do some easy mobile testing on your device of choice, or you may have an eager customer that wants to see your latest and greatest. Alternatively, your application might have webhooks to other services that require a public URL.

There are several options you could consider. You could use a cloud service like Microsoft Azure or Amazon Web Services, but you’ll need to register, configure, and eventually pay. Zeit’s Now is great, but it only serves up static files. Ngrok is full-featured and robust, but if you’re looking for a quick solution with minimal configuration you should look elsewhere.

I prefer Localtunnel and its amazing simplicity. Its simplicity should not be taken as a deficiency, as others have noted. Once I download the package, all I need to do is tell Localtunnel the port I am working on—then I get back a public URL I can share with anyone in the world.

You can get Localtunnel from a Node.js package. (If you need Node, download it from this site, and of course confirm the installation by typing node -v in your shell.)

To get started, install Localtunnel from NPM:

npm install -g localtunnel

Then, once your localhost server is running, enter the following in the shell (change your port appropriately):

lt --port 8000

That’s it! You’ll get back a randomized subdomain URL to share:

your url is: https://cqjfkqyjve.localtunnel.me

Optionally, if you’d like a friendlier subdomain, you can use the subdomain parameter to specify one:

lt --port 8000 --subdomain dave

Localtunnel then gives you a custom subdomain. You can use your desired subdomain as long as no one else is using it when you request it.

your url is: https://dave.localtunnel.me

You can share this link with anyone in the world, as long as the lt session remains active.

To find out more about Localtunnel, head over to their GitHub repository. From there, you’ll also see that you can use an API as well.
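
For instance, here’s a minimal sketch of what the Node API might look like (this assumes the promise-based interface in recent versions of the package; older releases used a callback style):

const localtunnel = require('localtunnel');

(async () => {
  // Open a tunnel to the local server running on port 8000.
  const tunnel = await localtunnel({ port: 8000 });

  // The public URL to share, e.g. https://cqjfkqyjve.localtunnel.me
  console.log('your url is:', tunnel.url);

  tunnel.on('close', () => {
    // The tunnel was closed (manually or by the server).
  });
})();

If you’re looking to share your localhost with little to no thinking or configuration, give Localtunnel a shot.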

]]>
<![CDATA[ Exploring the Web Storage APIs ]]> https://www.daveabrock.com/2017/02/04/exploring-the-web-storage-apis/ 608c3e3df4327a003ba2fe1c Fri, 03 Feb 2017 18:00:00 -0600 For years, if we wanted to store local data we would look to using HTTP cookies—a convenient way to persist small amounts of data on a user’s machine. And we do mean small: cookies are limited to about 4KB apiece. Also, the cookie is passed along with every request and response even if it isn’t used, making for heavy HTTP messaging.

Enter the web storage APIs, a way for you to store key/value paired string values on a user’s machine. Local storage provides much more space—modern browsers support a minimum of 5MB, much more than the 4KB for cookies.

When using web storage, we can use either the localStorage or sessionStorage object, both of which expose the same Storage interface. Let’s work through how to accomplish this.

Checking web storage compatibility

Although most browsers support web storage, you still should be a good developer and verify that a user isn’t using an unusually old browser like IE6:

function isWebStorageSupported() {
   return 'localStorage' in window;
}
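
One caveat worth hedging against: in some browsers (older Safari in private browsing, for example), localStorage exists but throws as soon as you write to it. A more defensive check might look like this:

function isWebStorageSupported() {
  try {
    // Attempt a real write/remove round trip; some browsers
    // expose localStorage but throw on setItem.
    var testKey = '__storage_test__';
    window.localStorage.setItem(testKey, testKey);
    window.localStorage.removeItem(testKey);
    return true;
  } catch (e) {
    return false;
  }
}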

When to use localStorage or sessionStorage

The localStorage and sessionStorage APIs are nearly identical with one key exception: persistence.

The sessionStorage object is only available for the duration of the browser session, and is deleted automatically when the window is closed. However, it does stick around for page reloads.

Meanwhile, localStorage persists until it is explicitly deleted by the site or the user. All changes made to localStorage are available for all current and future visits to the site.

Whatever the case, both localStorage and sessionStorage work well for non-sensitive data needed within an application. The non-sensitive part matters: data stored in localStorage and sessionStorage can easily be read or updated from a user’s browser.
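
The difference in lifetime is the whole decision, as this quick sketch shows:

// Same API, different lifetimes:
sessionStorage.setItem('draft', 'survives reloads, gone when the tab closes');
localStorage.setItem('prefs', 'persists until explicitly removed');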

Web storage API methods and properties

The following methods and properties are available on the localStorage and sessionStorage objects, which are both instances of the global Storage type. (A quick sketch follows the list.)

  • key(index) - finds a key at a given index. As with finding indexes in other code you write, you should check the length before finding an index to avoid any null or out-of-range exceptions.
  • getItem(key) - retrieve a value by using the associated key.
  • setItem(key, value) - stores a value by using the associated key. Whether you are setting a new value or updating an existing value, the syntax is the same.
  • removeItem(key) - removes a value from local storage.
  • clear() - removes all items from storage.
  • length - a read-only property that gets the number of entries being stored.
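
Here’s that quick sketch, using localStorage (sessionStorage works identically):

localStorage.setItem('theme', 'dark');      // add or update an entry
console.log(localStorage.getItem('theme')); // "dark"
console.log(localStorage.length);           // 1 (assuming storage was empty)
console.log(localStorage.key(0));           // "theme"
localStorage.removeItem('theme');           // delete one entry
localStorage.clear();                       // delete everything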

Storing objects

While you can only have string values in web storage, you can store arrays or even JavaScript objects using the JSON notation and the available utility methods, like in the following example.

var player = { firstName: 'Kris', lastName: 'Bryant' };
localStorage.setItem('kris', JSON.stringify(player));

You can then use the parse() method to deserialize the kris object.

var player = JSON.parse(localStorage.getItem('kris'));

Keeping web storage synchronized

How do you keep everything in sync when a user has multiple tabs or browser instances of your site open concurrently? To solve this problem, the web storage APIs have a storage event that is raised whenever an entry is updated (add/update/remove). Subscribing to this event can provide notifications when something has changed; note that the event fires in other same-origin browsing contexts, not in the page that made the change. This works for both localStorage and sessionStorage.

Subscribers receive a StorageEvent object that contains data about what changed. The storage event cannot be canceled from a callback; it is merely a notification mechanism that informs subscribers when a change happens. The following properties are accessible from the StorageEvent object.

  • key - gets the key. The key will be null if the event was triggered by clear()
  • oldValue - gets the initial value if the entry was updated or removed. Again, the value will be null if an old value did not previously exist, or if clear() is invoked.
  • newValue - gets the new value for new and updated entries. The value is null if the event was triggered by the removeItem() or clear() methods.
  • url - gets the URL of the page that performed the storage action
  • storageArea - gets a reference to either the localStorage or sessionStorage object

To begin listening for event notifications, you can add an event handler to the storage event as follows.

function respondToChange(event) {
  alert(event.newValue);
}

window.addEventListener('storage', respondToChange, false);

To trigger this event, perform an operation like the following in a new tab from the same site.

localStorage.setItem('player', 'Kris');

The fine print

While web storage offers many benefits over cookies, it is not the end-all, be-all solution, and comes with serious drawbacks as others have noted.

  • Web storage is synchronous - because web storage runs synchronously, it can block the DOM from rendering while I/O is occurring.
  • No indexing or transactional features - web storage does not have indexing, which may incur performance bottlenecks on large data queries. If a user is modifying the same storage data in multiple browser tabs, one tab could potentially overwrite the value in another.
  • Web storage does I/O on your hard drive - because web storage writes to your hard drive, it can be an expensive operation depending on what your system is currently doing (virus scanning, indexing data, etc.) While you can store a lot more data in local storage, you’ll need to be cognizant of performance.
  • Persistence - if a user no longer visits a site and its storage is never explicitly deleted, that data still sits on disk and is loaded with every new browser session
  • First request memory loading - because browsers load a site’s storage into memory, it could use a lot of memory if many tabs are utilizing web storage mechanisms.

Web storage does offer you a simple, direct way to store user data, with the caveat that its synchronous, I/O-bound nature can degrade performance if you do not use it wisely. Much like anything else you code, understand its utility and its nuances before using it, and don’t be greedy. It very well can bite you in the rear end if you expect too much.

]]>
<![CDATA[ Exploring the Geolocation API ]]> https://www.daveabrock.com/2017/02/04/exploring-the-geolocation-api/ 608c3e3df4327a003ba2fe1b Fri, 03 Feb 2017 18:00:00 -0600 Building location-aware applications is a snap with the HTML5 Geolocation APIs. These APIs allow you to retrieve a user’s location – with the user’s permission – as a one-time request or over a period of time. This post will walk you through how to implement the Geolocation API.

Checking for support

When implementing the Geolocation APIs, the first thing you’ll want to do is see if geolocation is supported by a user’s browser. (You’ll notice that the geolocation functionality is supported by virtually all browsers, but you know what they say about assuming.) You can check by writing code that uses the in operator; this returns true if the geolocation property exists in the window’s navigator object.

function supportsGeolocation() {
    return 'geolocation' in navigator;
}

Now that you can use the Geolocation API, you can use the position.coords property (from the position object passed to your success callback) to retrieve some of the following values:

  • The latitude and longitude attributes are geographic coordinates specified in decimal degrees.
  • The altitude attribute denotes the height of the position, specified in meters. If the implementation cannot provide altitude information, the value of this attribute must be null.
  • The accuracy attribute denotes the accuracy level of the latitude and longitude coordinates. It is specified in meters and must be supported by all implementations.
  • The altitudeAccuracy attribute is specified in meters. If the implementation cannot provide altitude information, the value of this attribute must be null.
  • The heading attribute denotes the direction of travel of the hosting device and is specified in degrees, where 0° ≤ heading < 360°, counting clockwise relative to the true north. If the implementation cannot provide heading information, the value of this attribute must be null. If the hosting device is stationary, then the value of the heading attribute must be NaN.
  • The speed attribute denotes the magnitude of the horizontal component of the hosting device’s current velocity and is specified in meters per second. If the implementation cannot provide speed information, the value of this attribute must be null.

From here, you can decide if you want to get a user’s current location (one-time event), or watch a current position (specified time).

Getting current location

There are many scenarios where you just want to get a user’s location once – like, where is a user’s closest movie theater or grocery store? If this is what you’re after, then you can call the getCurrentPosition method from the navigator.geolocation object.

This method sends an async request to detect the user’s position. When the position is determined, a callback function is executed. You can optionally provide a second callback function to be executed if an error occurs and also a third parameter as an options object. In the options object, you can set the following attributes:

  • enableHighAccuracy – get the best possible result, even if it takes longer (default is false)
  • timeout – timeout, in milliseconds, that the browser will wait for a response (the default is Infinity, meaning there is no timeout)
  • maximumAge – specifies that a cached location is acceptable, so long as it isn’t longer than the milliseconds that you specify (the default is 0, meaning a cached location is not used)

In the following example, I’ll build a simple page that asks a user to click a button that will execute a function to find a user’s location. The information will be displayed in the location div.

<html>
    <head>
        <script src="js/current-location.js"></script>
    </head>
    <body>
        <p><button onclick="findMe()">Show my location</button></p>
        <div id="location"></div>
    </body>
</html>

The following example retrieves the latitude, longitude, and accuracy of the current user, provided they grant permission to access their location. After getting this information, we’ll display a Google Maps image of their location, easily accessible by using the Google Maps API, which takes latitude and longitude as parameters.

function findMe() {
  var output = document.getElementById("location");

  function success(position) {
    var latitude  = position.coords.latitude;
    var longitude = position.coords.longitude;
    var accuracy = position.coords.accuracy;

    output.innerHTML = '<ul> \
                        <li>Latitude: ' + latitude + ' degrees</li> \
                        <li>Longitude: ' + longitude + ' degrees</li> \
                        <li>Accuracy: ' + accuracy + 'm</li> \
                        </ul>';

    var img = new Image();
    img.src = "https://maps.googleapis.com/maps/api/staticmap?center=" + latitude + "," + longitude + "&zoom=13&size=300x300&sensor=false";
    output.appendChild(img);
  };

  function error() {
    output.innerHTML = "Unable to retrieve your location!";
  };

  output.innerHTML = "Getting location ...";

  var options = {
      enableHighAccuracy: true,
      timeout: 3000,
      maximumAge: 20000
  };

  navigator.geolocation.getCurrentPosition(success, error, options);
}

Monitoring current location

If you expect a user’s position to change frequently, as with turn-by-turn directions, you can set up a callback function that the browser calls with updated position information. You can accomplish this by using the watchPosition function. This function has the same parameters as getCurrentPosition.

The watchPosition method returns an ID that identifies the watch request. Then, when you wish to stop watching a user’s location, you can pass that ID to the clearWatch method.

In this case, we have another simple page that has buttons to start and stop watching a user’s position. The position information will display in the message div.

<html>
    <head>
        <script src="js/jquery-3.1.0.min.js"></script>
        <script src="js/watch-position.js"></script>
    </head>
    <body>
        <div id="message"></div>
        <button id="startLocation">Start</button>
        <button id="stopLocation">Stop</button>
    </body>
</html>

In the JavaScript file, we’ll start by initializing a watchId, using jQuery to wire up click events (more on that in a second), checking to see if the API is supported, and then writing a utility method that will show the information in the message div.

var watchId = 0;

$(document).ready(function() {
    $('#startLocation').on('click', getLocation);
    $('#stopLocation').on('click', endWatch);
})

function supportsGeolocation() {
    return 'geolocation' in navigator;
}

function showMessage(message) {
    $('#message').html(message);
}

And now, here’s where the magic happens: if a user’s browser supports geolocation, we call watchPosition, which in this case takes a success callback, an optional error callback, and an optional options parameter.

function getLocation() {
    if (supportsGeolocation()) {
        var options = {
            enableHighAccuracy: true
        };
        watchId = navigator.geolocation.watchPosition(showPosition, showError, options);
    }
    else {
        showMessage("Geolocation is not supported by this browser.");
    }
}

function showPosition(position) {
    var datetime = new Date(position.timestamp).toLocaleString();
    showMessage("Latitude: " + position.coords.latitude + "<br />"
              + "Longitude: " + position.coords.longitude + "<br />"
              + "Timestamp: " + datetime);
}

function showError(error) {
    switch (error.code) {
        case error.PERMISSION_DENIED:
            showMessage("User denied Geolocation access request.");
            break;
        case error.POSITION_UNAVAILABLE:
            showMessage("Location information unavailable.");
            break;
        case error.TIMEOUT:
            showMessage("Get user location request timed out.");
            break;
        case error.UNKNOWN_ERROR:
            showMessage("An unknown error occurred.");
            break;
    }
}

Finally, when the user clicks the Stop button, the clearWatch method is called and the browser stops tracking the user’s location.

function endWatch() {
    if (watchId != 0) {
        navigator.geolocation.clearWatch(watchId);
        watchId = 0;
        showMessage("Monitoring complete.");
    }
}
]]>