In our current project, we follow a Git rebasing strategy. Our fear was that letting a branch live long would mean greater difficulty when it comes time to complete a pull request, especially if the feature branch has substantial changes - 60 to around 100 file changes rather than under 50.
When it came time to decide whether a moderately risky change to a high-impact area should wait for the end-of-month rush to pass before releasing, we opted for releasing it as soon as possible, so that we wouldn’t have to keep rebasing that branch any longer.
If we were doing merges instead of rebasing, the situation wouldn’t be any different. Instead of rebasing when new changes were added to the main-line branch, we would be merging. Nothing else changes.
There is a problem with leaving a branch for a long time without attending to it. Is that something to be concerned with? Yes. The code that the feature branch changes might have moved or changed unrecognisably in the main-line branch by the time you merge the feature branch. Therefore, both rebase-based and merge-based strategies are against the idea of long-lived feature branches.
If you have the option to choose, does that mean you should always prefer trunk-based development, where you make changes directly to the main-line branch? The problem with trunk-based development is that you trade off communicating the design of the code to the rest of the team, since that design only emerges once all the tiny commits finally make it into the main-line branch. I’ve found that teams following trunk-based development also don’t tend to put much effort into code reviewing with pull requests. If you are a very small team, or your team mainly consists of a handful of very senior developers, perhaps trunk-based development is what works best for you.
I want our team of 10 - 12 developers to still review the risks and feel like we have control over when a substantial feature gets merged for release. For that, I think long-lived branches that are kept up to date frequently are the preferred approach. We have cultivated a lower tolerance for leaving branches alone for long periods. We can’t have the psychic weight forcing us to feel bad about not integrating back into the main-line branch, because by having long-lived branches we are consciously making a choice in favour of the benefits they bring us.
Word of warning: these commands were tested only on Windows Terminal.
These instructions will require Docker to be installed locally. If you don’t want to install PostgreSQL locally but want to perform some backup and restore tasks against remote servers, this is a guide for you.
For me, I went searching because I wanted to migrate an Azure PostgreSQL Single Server to an Azure PostgreSQL Flexible Server, as the Single Server offering will be retired in March 2025. Azure wasn’t letting me do it from the portal because the source database was in a different region to my destination, and Azure wasn’t letting me create a Flexible Server in the target server location for some reason - I assume because the service is not available in that region.
When using pg_dump, the general advice is to use the same or a later version of the PostgreSQL tools. So, if your target server is PostgreSQL version 15.4 and your source server is version 11, you’d use at least version 15.4 of the PostgreSQL tools.
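If you are not sure which version a server is running, you can check it with a quick query first (a sketch using the same Docker image; substitute your own host, user, and database):
docker run -it --rm postgres:15.4 psql -h <source_host> -U <user> -d <database> -c "SELECT version();"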
You can only migrate one database at a time with this approach. So, repeat the steps below for all the databases.
docker run -it --rm -v .:/backup postgres:15.4 pg_dump -h <source_host> `
-U <user> -d <database> -f /backup/<file_name>.sql
The command above mounts the /backup directory in the container as a volume (-v) pointing to the host’s current directory. This allows pg_dump to write the backup SQL file to the current directory on your local machine. Notice the -it flag, as it’s needed to enter the password for the user after executing the command. The --rm flag tells Docker to remove the container when the command is finished running.
psql requires the database to already exist on the target host.
docker run -it --rm postgres:15.4 createdb <database> -h <target_host> `
-p 5432 -U <user>
If successful, you will see no output, so make sure to check that the database was created.
Again, notice -it, as you will be prompted for a password. Also, this time you provide the host address and credentials for the target server.
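For example (a sketch, assuming the same credentials), you can list the databases on the target host and look for the new one:
docker run -it --rm postgres:15.4 psql -h <target_host> -U <user> -l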
Now, you can use the psql command and point it to the .sql dump file we created earlier.
docker run -it --rm -v .:/backup postgres:15.4 psql -h <target_host> `
-U <user> -f /backup/<file_name>.sql <database>
Notice that the database name is provided, as that’s the database the file will be executed against.
When it comes to creating a Stripe webhook controller in ASP.NET MVC, we need to verify that the request is in fact coming from Stripe using the Stripe-Signature header value.
[Route("stripe_webhook")]
public class StripeWebHookController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Index()
    {
        var json = await new StreamReader(Request.Body)
            .ReadToEndAsync();
        try
        {
            var signatureHeader = Request.Headers["Stripe-Signature"];
            var stripeEvent = EventUtility.ConstructEvent(
                json,
                signatureHeader,
                "whsec_xxxx");
            switch (stripeEvent.Type)
            {
                case Events.CustomerSubscriptionTrialWillEnd:
                {
                    await new EndOfTrialEmail(emailClient)
                        .SendAsync(stripeEvent);
                    break;
                }
            }
            return Ok();
        }
        catch (StripeException)
        {
            return BadRequest();
        }
    }
}
If we were to write a unit test for this action, it might look like…
public class StripeWebHookControllerTests
{
[Fact]
public async Task Customer_subscription_trial_will_expire_sends_out_an_email()
{
var emailClient = new FakeEmailClient();
var sut = new StripeWebHookController(emailClient, ...);
var result = await sut.Index();
result.Should().BeOfType<OkResult>();
var expected = new MailMessage(
"noreply@example.org",
"user@example.com",
"Your trial ends soon",
null);
emailClient.Emails.Should().ContainEquivalentOf(
expected,
opt => opt.Including(x => x.From)
.Including(x => x.To)
.Including(x => x.Subject));
}
}
I am using the FluentAssertions (v6.11.9) library for my assertion on the email that got sent out.
This test is going to fail because the Request is null, and we haven’t got a value for the Stripe-Signature header.
var sut = new StripeWebHookController(emailClient, ...)
{
ControllerContext = new ControllerContext
{
HttpContext = new DefaultHttpContext
{
Request =
{
Headers = { ["Stripe-Signature"] = string.Empty }
}
}
}
};
The test will now fail because the EventUtility.ConstructEvent method won’t be able to verify the signature. What value do we provide in the Stripe-Signature header for the verification to pass?
According to the Stripe Node.js repository, there is a stripe.webhooks.generateTestHeaderString method available in that library. There isn’t one, as far as I can tell, in the Stripe.net library.
Looking into how the validation logic is implemented, the Stripe signature is computed using the request body and a Unix timestamp value. The header value has the format t={timestamp},v1={signature}, where v1 represents the scheme.
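So a complete header value would look something like the following (both the timestamp and the signature here are made-up placeholder values for illustration):
Stripe-Signature: t=1686150000,v1=5257a869e7...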
This ComputeSignature method is taken straight from the stripe-dotnet repository.
private static string ComputeSignature(
string secret,
string timestamp,
string payload)
{
var secretBytes = Encoding.UTF8.GetBytes(secret);
var payloadBytes = Encoding.UTF8.GetBytes($"{timestamp}.{payload}");
using var cryptographer = new HMACSHA256(secretBytes);
var hash = cryptographer.ComputeHash(payloadBytes);
return BitConverter.ToString(hash)
.Replace("-", string.Empty).ToLowerInvariant();
}
With the ComputeSignature method in place, let’s modify our test to create the Stripe-Signature header.
var timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds()
.ToString();
var signature = ComputeSignature("whsec_xxx", timestamp, "{}");
...
Request = { Headers = { ["Stripe-Signature"] = $"t={timestamp},v1={signature}" } }
The timestamp needs to be close to the current time because there is a tolerance threshold in the validation logic.
The next exception will be thrown when parsing the request body. It needs to look like a Stripe Event, and it also needs to match the ApiVersion specified with StripeConfiguration.ApiVersion, or the default version included with the library.
StripeConfiguration.ApiVersion = "2022-11-15";
var utf8Json = new MemoryStream();
await JsonSerializer.SerializeAsync(utf8Json,
new
{
type = Events.CustomerSubscriptionTrialWillEnd,
api_version = "2022-11-15",
data = new
{
@object = new
{
@object = "subscription",
customer = "cus_123"
}
},
request = new EventRequest()
});
utf8Json.Position = 0;
...
var signature = ComputeSignature("whsec_xxx", timestamp,
Encoding.UTF8.GetString(utf8Json.ToArray()));
...
Request = { Body = utf8Json }
And with that, your test should now start passing.
You might already be using a virtual machine or dual booting into a Windows partition as part of your current development workflow.
The setup that I’ve been using for the past year is slightly different, with some pitfalls. I make changes to the codebase on my Mac using my preferred editor. I ssh into a Windows 11 virtual machine to compile the code, run tests, and launch the application. If I am developing a website, I can also browse to it from my Mac in my preferred browser.
The pitfalls are that I don’t have IntelliSense support while making changes to the code in my IDE, and I currently have to tunnel the website port to the internet in order to browse the website from my Mac.
Vagrant is a tool that can be used to spin up a virtual machine from a configuration file, named Vagrantfile, usually located at the root of your Git repository. Every Vagrant environment requires a box. The box used in my local development setup is the gusztavvargadr/windows-11-21h2-enterprise box.
Vagrant.configure("2") do |config|
config.vm.box = "gusztavvargadr/windows-11-21h2-enterprise"
...
end
I’ve set up the network configuration for the VM as follows.
config.vm.network "private_network", type: "dhcp",
virtualbox__intnet: true
virtualbox__intnet: true allows the guest VM to access the host machine.
I also configure a synced folder path, so that any changes made on my Mac are replicated to the Windows VM immediately. Although this might not be necessary, as the current working directory seems to be automatically synced and available at C:\vagrant on the VM.
config.vm.synced_folder "./", "c:\\users\\vagrant\\code\\SampleRepo"
I also make use of Chocolatey to install some of the tools I need for development. You can use the provisioning stage of Vagrant to install these tools. Below, I am installing make, pwsh, and ngrok. Chocolatey is already included in the gusztavvargadr/windows-11-21h2-enterprise box image I am using.
config.vm.provision "shell",
name: "install software",
reset: true,
powershell_args: '-ExecutionPolicy Bypass',
inline: <<-SHELL
cinst make --version=4.3 --confirm
cinst pwsh --confirm
cinst ngrok --version=3.1.0 --confirm
SHELL
Now, I can run vagrant up to provision a Windows virtual machine. vagrant ssh allows me to ssh into the machine. I can make edits on my Mac and call make run in order to build and run my website project on the Windows VM. I run ngrok http 8080 to expose the IIS Express website running on port 8080 at a publicly accessible URL that ngrok provides. My make run task looks like:
iisexpress = "C:\\Program Files\\IIS Express\\iisexpress.exe"
appcmd = "C:\\Program Files\\IIS Express\\appcmd.exe"
run: build
$(appcmd) set config -section:system.webServer/httpErrors -errorMode:Detailed
$(appcmd) delete site "WebSite1"
$(appcmd) add site /name:WebSite1 /bindings:"http/*:8080:" /physicalPath:"C:\inetpub\wwwroot"
powershell "Get-ChildItem -Path C:\\inetpub\\wwwroot\\* | Remove-Item -Recurse -Confirm:$$false -Force"
powershell "Copy-Item -Path C:\vagrant\SampleRepo\WebSite\* -Destination C:\inetpub\wwwroot\ -Recurse -Force"
$(iisexpress) /config:C:\Users\vagrant\documents\iisexpress\config\applicationhost.config /site:WebSite1 /systray:false /trace:quiet
You can remote into the machine with username vagrant and password vagrant.
You can run your Docker containers from your Mac (if they are not Windows containers) and access them via the host IP address. You can find the host IP address by running the command…
ipconfig
Or, what I’ve recently discovered is that you can set up port forwarding from within the guest virtual machine.
# Port Forward SQL Server
netsh interface portproxy add v4tov4 listenport=1433 listenaddress=0.0.0.0 connectport=1433 connectaddress=10.0.2.2
In this case, I am forwarding all localhost:1433 calls to the host OS’s IP at 10.0.2.2:1433.
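If you later need to inspect or undo this, netsh can also list and delete portproxy rules (the parameters must match the rule added above):
# List all existing port forwarding rules
netsh interface portproxy show all
# Remove the SQL Server forwarding rule
netsh interface portproxy delete v4tov4 listenport=1433 listenaddress=0.0.0.0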
Note
If you notice that your box shuts down after you vagrant up, it is usually because the Windows machine’s trial period has ended. I have not been able to convert the Windows edition to a licensed version after spinning up the machine, so after the trial ended I had to resort to upgrading the version of the box and reprovisioning a new virtual machine. So make sure any modification to the VM is scripted so you can easily re-provision the VM every 90 days.
BUnit seems to be the default testing framework for anything Blazor related. You would install it into your test project like any other NuGet package.
dotnet add package bunit --version 1.12.6
The default project template for a Blazor WebAssembly project has a simple <SurveyPrompt /> Blazor component which I will use to demonstrate writing a unit test in this blog post. For simplicity, let’s pretend the implementation of the <SurveyPrompt /> component was…
<div class="alert alert-secondary mt-4">
<span class="oi oi-pencil me-2" aria-hidden="true"></span>
<strong>@Title</strong>
<span class="text-nowrap">
Please take our
<a target="_blank" class="font-weight-bold link-dark" href="#"
>brief survey</a
>
</span>
and tell us what you think.
</div>
@code { [Parameter] public string? Title { get; set; } }
The <SurveyPrompt /> component has one parameter called Title, which is rendered inside a <strong> element, which in turn is nested in a <div /> with the class alert. If we were to follow the BUnit documentation on how to write a test for the <SurveyPrompt /> component, you’d end up with something similar to…
[Theory, InlineData("Foo"), InlineData("Bar"), InlineData("Baz")]
public void Render_Title(string title)
{
using var ctx = new TestContext();
var cut = ctx.RenderComponent<SurveyPrompt>(parameters =>
parameters.Add(p => p.Title, title));
var actual = cut.Find(".alert strong").TextContent;
Assert.Equal(title, actual);
}
The above test will pass, but there is one issue I have with it. It couples the styling and structure of the component to the unit test. If the class .alert or the element <strong /> were to be changed or replaced, the test would stop working.
The guiding principle of Testing Library is…
The more your tests resemble the way your software is used, the more confidence they can give you.
Testing Library recommends avoiding test-specific attributes where possible, but suggests they are much better than querying based on DOM structure or CSS class names. It is common to use the data-testid attribute as a means of finding elements. I first came across it when using Cypress, an end-to-end testing framework. Cypress uses the data-cy, data-test, and data-testid attributes. However, Testing Library seems to prefer the data-testid attribute. They all serve the same purpose: you use the attribute strictly for finding elements for testing purposes and nothing else.
In our example, the <strong> element surrounding the @Title would have an additional HTML attribute…
<strong data-testid="survey-prompt-title">@Title</strong>
With this approach, we can change the Find() call from the previous test to find by the data-testid attribute.
[Theory, InlineData("Foo"), InlineData("Bar"), InlineData("Baz")]
public void Render_Title_TestId(string title)
{
using var ctx = new TestContext();
var cut = ctx.RenderComponent<SurveyPrompt>(parameters =>
parameters.Add(p => p.Title, title));
var actual = cut.Find("[data-testid='survey-prompt-title']").TextContent;
Assert.Equal(title, actual);
}
This is the recommended approach according to Testing Library: you’d try to find whether the text passed into the Title property was rendered by the component. BUnit doesn’t have direct support for this, but we can still achieve the same result by using the CSS selector :contains().
[Theory, InlineData("Foo"), InlineData("Bar"), InlineData("Baz")]
public void Render_Title(string title)
{
using var ctx = new TestContext();
var cut = ctx.RenderComponent<SurveyPrompt>(parameters =>
parameters.Add(p => p.Title, title));
cut.Find($":contains({title})");
}
Find() will throw a Bunit.ElementNotFoundException when it can’t find any element that matches the :contains() CSS selector. Therefore, there is no need for any more assertion statements. If the line doesn’t throw an exception, we’ve passed our test. We can clean this up further by creating an extension method called VerifyTextContaining() on the IRenderedComponent type.
public static class RenderedComponentExtensions
{
public static void VerifyTextContaining<T>(
this IRenderedComponent<T> component,
string text) where T : Microsoft.AspNetCore.Components.IComponent
=> component.Find($":contains({text})");
}
cut.VerifyTextContaining(title);
You can make the error message more useful than Bunit.ElementNotFoundException : No elements were found that matches the selector ':contains(Foo)' by catching the ElementNotFoundException and using the FluentAssertions library or some other means.
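For example, the VerifyTextContaining() method in the extension class above could catch the exception and rethrow with a message that states the intent of the assertion. A minimal sketch, here using xUnit’s XunitException (FluentAssertions’ Execute.Assertion would work similarly):
public static void VerifyTextContaining<T>(
    this IRenderedComponent<T> component,
    string text) where T : Microsoft.AspNetCore.Components.IComponent
{
    try
    {
        component.Find($":contains({text})");
    }
    catch (Bunit.ElementNotFoundException)
    {
        // Surface the text we expected rather than the raw CSS selector.
        throw new Xunit.Sdk.XunitException(
            $"Expected the rendered markup to contain the text '{text}', but it was not found.");
    }
}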
Testing the behaviour rather than the structure of the elements within the component is more resilient. Perhaps we should discourage the use of the Find() and FindAll() methods altogether in our BUnit test projects?
There is an open source project that I came across which has started implementing the Testing Library APIs for Blazor, called blazor-testing-library. It’s fairly new and only has 2 stars at the time of writing this post.
Visit the Mono download page and download the latest release for your operating system. As the title of the blog post suggests, I am going to be using a Mac. At the time of writing, the stable version of Mono is 6.12.0.
Complete the installation following the guide on the website. Once it’s complete, the Mono command line tools will be available at the path /Library/Frameworks/Mono.framework/Versions/Current/Commands. You can optionally add this path to your $PATH environment variable.
I’ll be making use of make and a Makefile in order to write my build scripts, but you can choose your own preferred tool. Create a Makefile at the root of your repository / project structure.
touch Makefile
Next, we will need a build task in the Makefile that will compile the project / solution.
msbuild = "/Library/Frameworks/Mono.framework/Versions/Current/Commands/msbuild"
build:
$(msbuild) ./ExampleWebsite.sln
Notice that on the first line, I set a variable for msbuild which points to the msbuild command line tool added by the Mono framework installation. I then refer to this variable in the build task. If you’ve added the Mono installation path to your $PATH environment variable, you don’t need to declare a variable and can invoke the msbuild command directly from the build task. That is a good idea if the Makefile and its tasks are going to be used from a Windows machine as well, where the msbuild command also resolves.
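In that case, the task could be as simple as the following (assuming msbuild resolves on the PATH of whichever machine runs it):
build:
	msbuild ./ExampleWebsite.sln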
At this stage, executing the build command make build will error with the message that NuGet packages are missing. So, let’s add a new task to restore the solution.
restore:
$(msbuild) ./ExampleWebsite.sln /t:restore /p:RestorePackagesConfig=true
build: restore
$(msbuild) ./ExampleWebsite.sln
/t:restore indicates to msbuild that we want to invoke the restore msbuild target. /p:RestorePackagesConfig=true is required if the project uses a packages.config file for managing dependencies; you can omit this argument if your .csproj file uses the newer SDK-style format. You can also see I’ve specified that the build task depends on the new restore task. Therefore, calling the build task will first run the restore task.
Mono comes with a command line tool called xsp4 that can be used to run a web application. XSP is the Mono ASP.NET web server. It provides a minimalistic web server which hosts the ASP.NET runtime and can be used to test and debug web applications that use the System.Web facilities in Mono. xsp4 is not intended to be used as your production web server, unless it’s used as a means of integrating with a production web server such as Apache.
We will call the new task run_web.
xsp4 = "/Library/Frameworks/Mono.framework/Versions/Current/Commands/xsp4"
run_web: build
$(xsp4) --verbose --applications=/:./ExampleWebsite
Again, the run_web task has a dependency on the build task. I am also referring to the xsp4 tool via a variable that points to the actual location of the tool. The --verbose flag prints extra messages to the output, which are useful for debugging purposes. The --applications argument allows you to provide a list of virtual and real directories for all the applications we want to manage with this server. The virtual and real directories are separated by a colon. In our case, we only specify one application: the root (/) virtual directory is mapped to the relative real directory where the website project is located.
I suggest first reading the ASP.NET getting started page on the Mono project website to learn more about the web server. There are lots of other arguments that you can pass to the tool, which are documented in the xsp4 man pages.
If you run the new task make run_web, there will be a website running on port 9000 on localhost which you can browse to. As the task output suggests, you can press the Enter / Return key to stop the web server.
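To quickly confirm that the site is being served (assuming the port shown above), you can request the headers from another terminal:
curl -I http://localhost:9000/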
If you have team members using more than one development operating system, it is a good idea to make sure your Continuous Integration pipeline builds on all of the different operating systems, because it is easy for one team member to introduce a change that breaks the application for others.
Try to move away from .NET Framework as soon as possible to .NET 7, which is the latest at the time of writing, as it has cross-platform support built in.
If you want to discuss something with more than one person, it is difficult to find times when everyone involved is free. People are away from their desks at different times. They take coffee and lunch breaks at different times. When I worked from the office, most of us went to get lunch together. Those with kids will be going on school runs at different times too.
I think my grudge is against scheduled, repeated meetings involving more than two people. I don’t mind catching up with someone, or getting on an ad-hoc call to review their work, pair, or answer questions. In fact, I think that’s very important. You need to have regular one-to-one meetings, which are scheduled, repeated meetings between two people. I have a weekly catch-up with my line manager and a fortnightly catch-up with each person in my team. These meetings last about 30 minutes.
When working in a Scrum team, there are a few ceremony meetings that every member of the team needs to attend. These are, again, repeated and typically involve more than one other person.
Stand-up meetings usually happen every day and involve all of the team members. In my experience, teams usually do a good job of keeping these to a maximum of 15 minutes. Each person spends 2 minutes giving the rest of the team an idea of “what they did the day before”, “what the planned work for today is”, and “whether anything is blocking them from progressing”.
At my current work, the virtual stand-up meeting has been replaced by a message that gets posted to a Slack channel. We use a service called Geekbot, which posts the 3 questions to each member every day at 9 o’clock their local time. The Slack app then posts the responses to a public Slack channel. Team members can answer these questions any time they like, and they can catch up with other people’s responses whenever they prefer.
We still have virtual stand-up calls on Mondays and Tuesdays. I find that people do read what others are working on, even if it’s optional for them to do so. It’s common for people to zone out during virtual meetings and end up not listening until it’s their turn to speak, then zone out again after they are done. Some people post updates with links to work items or pull requests and clear explanations, whereas others keep their updates short. And it changes from day to day.
Sprint refinement meetings are another repeated meeting that we have once a week. Most of the time, people are not engaged enough in these sessions. Ideally, each person needs to be aware of the stories that will be discussed in the call, and they need to be told of this at least two days before the refinement session. The longer each person gets to think about a particular story, the better. Chances are that they will have questions prepared for the refinement; better still, they might already have the answers. The team will benefit immensely if there are alternative solutions to the work item beyond the one prescribed by the technical lead or architect. Without time given ahead of the meeting, we are demanding that people think on the spot about a requirement and find flaws in the solution that the Product Owner, Business Analysts, and Technical Lead have come up with.
I think that it’s not possible to replace sprint refinement meetings with asynchronous alternatives, especially in a team that follows a “traditional” Agile methodology. That makes it more important to utilise the full hour with the team to its maximum potential. Business Analysts or Product Owners need to communicate early with the team about stories that will be discussed in the meeting. They also need to make sure enough scenarios are captured in the acceptance criteria of the story so that developers who are interested can make sense of the requirement and the proposed solution ahead of time.
Developers need time to think through the scenarios captured in the acceptance criteria. Are there missing criteria that haven’t been thought through? Are there assumptions made about the code or architecture that are incorrect or outdated? Does the story require a design discussion? Is there technical debt in this area of the code base that could be addressed with the work item, and that should therefore be included in the estimate? The team can capture the tasks for the work item that are obvious at this stage; there will be time to add more tasks as work happens on the work item.
We currently spend a lot of time in meetings designed to prepare us to give a live demo of the features we delivered in the current sprint. This inevitably brings a lot of stress to the people who have something to present. Time is wasted multiple times across the preparation sessions and the actual demo, and time is spent preparing the environment for the demo and for each of the practice sessions. I say instead that each team spends time every sprint writing a blog post in the format of release notes, with screenshots and video recordings, that is end-user friendly, and posts it before the start of the following sprint. If you use Confluence, for example, each space can contain its own blog. I name the blog post “🚢 Shipped in Sprint X by Team ABC”.
One of the problems with sprint demo meetings is that nobody watches the recording afterwards, except perhaps if you were off work that day. It’s also not possible to search for a demonstration of a particular piece of functionality across many recordings; it becomes slightly easier if you’ve got accompanying slides or audio transcripts for each demo. It’s easier to visualise the progress from these blog posts every quarter or year.
Some of the meetings that demand everyone be available at a specific point in time can be replaced with an alternative. But I admit there are a few Scrum ceremonies that are important to keep as a meeting, even if it’s virtual, such as the retrospective. It might even be good to conduct retrospectives in person once in a while.
It does seem possible to replace most of the Scrum ceremony meetings with asynchronous counterparts. In some circumstances, it might even be required. There are certainly tools available to enable this.
I’ve only applied these suggestions on a Mac, but I think they are still valid options to consider for Linux operating systems.
If you have a spare Windows laptop, you can set it up to be RDP’d into from your local network. From your MacBook, install Microsoft Remote Desktop and connect to the Windows laptop.
This approach can be used to remote into any Windows machine, including virtual machines provided by your cloud vendor. Obviously, it’s cheaper if you have a spare laptop sitting around.
The trouble I had with this approach was that there was no point in having the MacBook. I was always on the Windows OS during work hours. The point of wanting to use my MacBook was so that I could use macOS apps for everything else that is not coding related.
I also had problems with my Windows partition, such as unexpected shutdowns, blue screens of death, etc. Including a weird one where the computer crashes or becomes unresponsive when booting up; it fixes itself after force restarting the machine a couple of times.
If you want to resize your hard drive partition in the future, beware: I had to re-install the Windows operating system from a clean slate. I’ve been told since that it is possible to resize the partition without requiring a clean install, but I haven’t tried it myself.
At the time, I was running Windows 10, and Windows 11 had just been released. I wasn’t allowed to upgrade to Windows 11 because of a security-related hardware requirement. I am not sure if this particular issue has been sorted, or if you still cannot install Windows 11 with Boot Camp Assistant.
Get hold of a Windows 11 ISO disk image file and use VirtualBox (which is free) to install and run Windows 11 side by side with your Mac. When I tested out this particular approach, I found the machine to be very laggy, especially when trying to build the solution. I also found it difficult to get the virtual machine screen to go full screen. However, I think if more time was spent fixing that particular issue, I would’ve found a solution.
An alternative to VirtualBox is Parallels, which costs £99 per year for the Professional edition and £89.99 for the Home and Student edition at the time of writing. One of my colleagues uses Parallels and gets the company to pay for the licence, and he is happy with his setup. He noted that because it’s paid software, you don’t get the lagginess you get with VirtualBox.
It seems quite possible to use Mono to compile your .NET Framework application and run it entirely from your Mac without the need for a Windows installation. However, I think it depends on how Windows-dependent your application is. I almost got a very Windows-dependent classic ASP.NET web application to the point of running from the Mac using Mono, but hit a brick wall when the application failed at runtime compilation of an ASPX page. I wonder if you can get this working entirely for a simple enough application, even a classic ASP.NET application.
The solution I’ve ended up using, and have been happy with for at least a year, is combining Vagrant and VirtualBox.
You’d use your Mac for editing files and performing Git operations. File changes made on the Mac are synchronised with the Windows machine with the help of synced folders. You can SSH into the VirtualBox Windows 11 machine using vagrant ssh, providing SSH is set up and the firewall is configured. I have access to PowerShell after SSH’ing and can invoke any build tasks needed to compile my application and execute the code.
The problem with this approach is that you only get limited IntelliSense or code completion from your IDE. You don’t get compile errors while you are making changes; you’ll have to wait for your build command to finish to get those.
It is also difficult to find a VirtualBox Windows 11 image, and to apply a valid licence to it. What I end up doing is recreating the machine after the trial runs out (every 60 days). So keep all instructions written down, or better yet automated, so that when the machine is recreated you have minimal work to do. You will have to RDP into the machine to do some one-off configuration changes occasionally. Also, say goodbye to debugging, or at least I am not aware of any options available for this.
Why bother? Why not give in and use Windows as your development environment?
Carefully considered constraints are good. They open up possibilities or make redundant tasks more apparent that were perhaps previously overlooked. Are IntelliSense and code completion important to you? Perhaps reading documentation, looking for the available methods on a class or which namespace it belongs to, will make you a better programmer. I find that I can always come up with an alternative approach to debugging the application, perhaps writing a test, adding some logs, or something else entirely.
The example in the documentation clears all previous registrations for the type within a registry. What if you want to replace the registration from the Container after mapping a type through a Registry?
One use case for this capability that I ran into recently was while writing a test case. I wanted to make use of StructureMap’s Registry, but replace one of the type registrations with an in-memory implementation. Assumptions made following the example in the documentation, however, did not produce the desired outcome.
Suppose you had the following classes and Registry.
public class InMemoryWidget : IWidget { }
public class Widget : IWidget { }
public class WidgetRegistry : Registry
{
public WidgetRegistry()
{
For<IWidget>().Use<Widget>();
}
}
Notice the difference in the registry compared to the example from the StructureMap documentation.
Now, to replace the IWidget registration with InMemoryWidget in the test, you might assume the following code would do the job.
var container = new Container(cfg =>
{
cfg.AddRegistry<WidgetRegistry>();
cfg.For<IWidget>().ClearAll().Use<InMemoryWidget>();
});
This doesn’t work as expected. If we print out what the container contains using WhatDoIHave(), you will still see two registrations.
TestContext.Out.WriteLine(container.WhatDoIHave(pluginType: typeof(IWidget)));
Outputs…
================================================================================
PluginType Namespace Lifecycle Description Name
--------------------------------------------------------------------------------
IWidget Example Transient Example.Widget (Default)
Transient Example.InMemoryWidget
================================================================================
Notice how the first one (Widget) is the default, because it was added first. The one mapped after clearing all previous registrations for IWidget in our test is appended to the list of registrations.
The reason for this, I gathered, is that the previous registration was done via a Registry, and clearing the registrations for IWidget ignores the ones added via a Registry. The previous approach will only clear registrations applied directly to the Container.
In order to clear the registration from the registry in this situation…
var container = new Container(cfg =>
{
var registry = new WidgetRegistry();
registry.For<IWidget>().ClearAll().Use<InMemoryWidget>();
cfg.AddRegistry(registry);
});
You clear the registration from the instance of the registry and pass that instance to the container configuration’s AddRegistry() method. We make use of the AddRegistry overload that accepts an instance instead of a type.
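As a sanity check (a minimal sketch, assuming the container above and an NUnit-based test like the WhatDoIHave() snippet earlier), resolving IWidget should now give back the in-memory implementation:
var widget = container.GetInstance<IWidget>();
// The Registry's mapping was replaced, so the in-memory implementation is resolved.
Assert.That(widget, Is.InstanceOf<InMemoryWidget>());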
My reading list is split between software engineering books, philosophy, and a few fiction books. There are, of course, software engineering books that are inspiring, and fiction books that I can’t put down once I start reading. However, I find the most joy in reading philosophical books, even though there are only a few in my collection.
I’ve tried reading books on a Kindle and on iPads. I’ve even tried audiobooks. However, I always fall back to physical books. I find myself losing concentration when listening to audiobooks. I find the eBook readers good for taking notes and searching for things, but I seem to keep forgetting what books I have on there and what I’ve read already. It doesn’t give me the same amount of joy to see that I’ve read or listened to those books. I certainly need to come up with a strategy to refer back to the content in a book. I tried a couple of times to highlight sections of a book that I found helpful, but haven’t been able to carry on the habit; it doesn’t feel sustainable.
I do, however, revert to my iPad mini when I want to read a book that’s probably not worth having a physical copy of, mainly because I am finding it difficult to store the books I own now. I also find it easier to read on the iPad while travelling and in limited light, and it’s easier to carry around than physical books.
Reading books is one of the best decisions I’ve made in life. It has allowed me to develop new habits and helped me live a life with meaning, or at least find meaning. I have been using books as my main source of information about the world and how to live. I have developed a tendency to only half trust what I hear from people I socialise with, unless it confirms what I read in a book somewhere.
I notice more and more that people who are successful in life or famous in their field all have reading books as one thing in common. Of course, I am not the first to notice this. I find it really useful when an interviewer asks the interviewee to list books that they recommend.