.NET 8 Dependency Injection

Dependency Injection (DI) is a design pattern used to decouple dependencies between components (services). It achieves this by delegating the creation and management of dependencies to an external container, rather than having components create their dependent objects directly.

In .NET, this is implemented primarily through IServiceCollection and IServiceProvider. IServiceProvider lives in the base class library, and the container itself ships in the Microsoft.Extensions.DependencyInjection packages, so the same DI abstractions are available everywhere on the .NET platform.
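
Before diving into the internals, here's a minimal end-to-end sketch of the two sides working together; IGreeter and ConsoleGreeter are hypothetical types used only for this example:

using System;
using Microsoft.Extensions.DependencyInjection;

// Registration side: describe services in an IServiceCollection.
var services = new ServiceCollection();
services.AddSingleton<IGreeter, ConsoleGreeter>();

// Resolution side: build an IServiceProvider and ask it for instances.
ServiceProvider provider = services.BuildServiceProvider();
provider.GetRequiredService<IGreeter>().Greet("DI");

public interface IGreeter { void Greet(string name); }
public sealed class ConsoleGreeter : IGreeter
{
    public void Greet(string name) => Console.WriteLine($"Hello, {name}!");
}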

1 ServiceCollection

IServiceCollection is essentially a list of ServiceDescriptor objects. A ServiceDescriptor describes a service's type, its implementation, and its lifetime.

public interface IServiceCollection : 
    IList<ServiceDescriptor>,
    ICollection<ServiceDescriptor>,
    IEnumerable<ServiceDescriptor>,
    IEnumerable;
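
Because the collection is literally a list of descriptors, you can also add and inspect ServiceDescriptor entries directly, without going through the extension methods. A small sketch (the repository types are stand-ins):

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Equivalent to services.AddScoped<IStudentRepository, StudentRepository>():
services.Add(ServiceDescriptor.Scoped<IStudentRepository, StudentRepository>());

// Describe() lets you pick the lifetime at runtime:
services.Add(ServiceDescriptor.Describe(
    typeof(IStudentRepository), typeof(StudentRepository), ServiceLifetime.Scoped));

// Because IServiceCollection is an IList<ServiceDescriptor>, it can be inspected like any list:
foreach (ServiceDescriptor d in services)
    Console.WriteLine($"{d.ServiceType.Name} -> {d.ImplementationType?.Name} ({d.Lifetime})");

public interface IStudentRepository { }
public class StudentRepository : IStudentRepository { }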

The framework provides a set of extension methods (primarily in ServiceCollectionServiceExtensions) to help us add service descriptors to the service container.

// Example: Registering services with different lifetimes
builder.Services.AddTransient<StudentService>(); 
// Registering keyed services (new in .NET 8)
builder.Services.AddKeyedTransient<IStudentRepository, StudentRepository>("a"); 
builder.Services.AddKeyedTransient<IStudentRepository, StudentRepository2>("b");
builder.Services.AddTransient<TransientService>();
builder.Services.AddScoped<ScopeService>();
builder.Services.AddSingleton<SingletonService>();
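
Keyed registrations are resolved by key, either explicitly via the keyed provider methods or by annotating a constructor parameter with [FromKeyedServices]. A sketch based on the registrations above:

// Explicit resolution by key:
var app = builder.Build();
IStudentRepository repoA = app.Services.GetRequiredKeyedService<IStudentRepository>("a");
IStudentRepository repoB = app.Services.GetRequiredKeyedService<IStudentRepository>("b");

// Or let the container inject a specific keyed implementation:
public class StudentService
{
    private readonly IStudentRepository _repository;

    public StudentService([FromKeyedServices("b")] IStudentRepository repository)
        => _repository = repository;
}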

2 ServiceProvider

IServiceProvider defines a method, GetService, which allows us to retrieve an instance of a service given its type.

public interface IServiceProvider
{
  object? GetService(Type serviceType);
}
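
In practice you rarely call GetService with a Type directly; the generic extension methods in ServiceProviderServiceExtensions are the usual entry point. A quick sketch, assuming provider is an IServiceProvider built from the registrations above:

object? instance = provider.GetService(typeof(SingletonService));          // resolved instance
object? missing  = provider.GetService(typeof(System.Text.StringBuilder)); // null: never registered

// The generic extensions are the usual entry point:
SingletonService svc = provider.GetRequiredService<SingletonService>();
// GetRequiredService throws InvalidOperationException instead of returning null when nothing is registered.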

Here's a look at the default implementation of GetService (it defaults to the root scope if no engine scope is provided):

// Simplified view
public object? GetService(Type serviceType) => 
    GetService(ServiceIdentifier.FromServiceType(serviceType), Root);

The more detailed internal implementation looks like this:

internal object? GetService(ServiceIdentifier serviceIdentifier, ServiceProviderEngineScope serviceProviderEngineScope)
{
    if (_disposed)
    {
        ThrowHelper.ThrowObjectDisposedException();
    }
  
    // Get the accessor for the requested service identifier
    ServiceAccessor serviceAccessor = _serviceAccessors.GetOrAdd(serviceIdentifier, _createServiceAccessor);
  
    // Execute resolution hook
    OnResolve(serviceAccessor.CallSite, serviceProviderEngineScope);
    DependencyInjectionEventSource.Log.ServiceResolved(this, serviceIdentifier.ServiceType);
  
    // Get the service instance using the accessor's realization function
    object? result = serviceAccessor.RealizedService?.Invoke(serviceProviderEngineScope);
    System.Diagnostics.Debug.Assert(result is null || CallSiteFactory.IsService(serviceIdentifier));
    return result;
}

The ServiceIdentifier struct essentially wraps the service type and an optional service key (introduced for .NET 8's keyed services):

internal readonly struct ServiceIdentifier : IEquatable<ServiceIdentifier>
{
    public object? ServiceKey { get; }
    public Type ServiceType { get; }
}

Clearly, service resolution is handled by serviceAccessor.RealizedService. The creation of the ServiceAccessor itself happens in CreateServiceAccessor:

private ServiceAccessor CreateServiceAccessor(ServiceIdentifier serviceIdentifier)
{
    // Get the CallSite for the service using CallSiteFactory. 
    // The CallSite represents the plan for resolving the service.
    ServiceCallSite? callSite = CallSiteFactory.GetCallSite(serviceIdentifier, new CallSiteChain());
  
    if (callSite != null)
    {
        DependencyInjectionEventSource.Log.CallSiteBuilt(this, serviceIdentifier.ServiceType, callSite);
      
        // Trigger event related to CallSite creation.
        OnCreate(callSite);

        // If the cache location is Root, it indicates a singleton service.
        if (callSite.Cache.Location == CallSiteResultCacheLocation.Root)
        {
            // Resolve the value directly from the cache.
            object? value = CallSiteRuntimeResolver.Instance.Resolve(callSite, Root);
            // Return an accessor that always returns the cached value.
            return new ServiceAccessor { CallSite = callSite, RealizedService = scope => value };
        }

        // Use the engine to create the realization function.
        Func<ServiceProviderEngineScope, object?> realizedService = _engine.RealizeService(callSite);
        return new ServiceAccessor { CallSite = callSite, RealizedService = realizedService };
    }
  
    // If no CallSite could be created, return an accessor that always returns null.
    return new ServiceAccessor { CallSite = callSite, RealizedService = _ => null };
}
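
Stripped of the real types, the pattern here is: a concurrent cache of accessors keyed by service identity, where singletons are resolved once up front and wrapped in a constant delegate, while everything else gets a realization function from the engine. A toy model of that idea (none of these types exist in MS.DI):

using System;
using System.Collections.Concurrent;

// Toy model of the accessor cache: one Func per service type that knows how to
// produce an instance for a given scope. Singleton accessors capture a pre-resolved value.
class ToyProvider
{
    private readonly ConcurrentDictionary<Type, Func<object, object?>> _accessors = new();

    public object? GetService(Type serviceType, object scope) =>
        _accessors.GetOrAdd(serviceType, CreateAccessor)(scope);

    private Func<object, object?> CreateAccessor(Type serviceType)
    {
        if (IsSingleton(serviceType))
        {
            object value = ResolveOnce(serviceType);   // resolved eagerly, once
            return _ => value;                         // constant accessor, like the Root-cache branch above
        }
        return scope => ResolveForScope(serviceType, scope); // realization function built by the "engine"
    }

    // Placeholders standing in for CallSiteFactory / the engine:
    private static bool IsSingleton(Type t) => false;
    private static object ResolveOnce(Type t) => Activator.CreateInstance(t)!;
    private static object ResolveForScope(Type t, object scope) => Activator.CreateInstance(t)!;
}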

2.1 ServiceProviderEngine

The ServiceProviderEngine is the execution engine responsible for resolving services within the provider. It's established when the provider is initialized. There are two engines: Dynamic and Runtime. On .NET Framework and .NET Standard 2.0 the Dynamic engine is always used; on modern .NET it is chosen whenever dynamic code compilation is supported and hasn't been explicitly disabled.

private ServiceProviderEngine GetEngine()
{
    ServiceProviderEngine engine;

#if NETFRAMEWORK || NETSTANDARD2_0
    engine = CreateDynamicEngine();
#else
    // Use dynamic engine if dynamic code compilation is supported and not disabled
    if (RuntimeFeature.IsDynamicCodeCompiled && !DisableDynamicEngine) 
    {
        engine = CreateDynamicEngine();
    }
    else
    {
        // Fallback to Runtime engine (e.g., for AOT scenarios)
        engine = RuntimeServiceProviderEngine.Instance;
    }
#endif
    return engine;

    [UnconditionalSuppressMessage("AotAnalysis", "IL3050:RequiresDynamicCode", 
        Justification = "CreateDynamicEngine won't be called when using NativeAOT.")]
    ServiceProviderEngine CreateDynamicEngine() => new DynamicServiceProviderEngine(this);
}

Due to conflicts between .NET AOT (Ahead-of-Time compilation) and dynamic code generation techniques (like System.Reflection.Emit), the Runtime engine must be used in AOT scenarios. However, the Dynamic engine remains the default in most other cases.

The Dynamic engine utilizes Emit technology (a dynamic compilation technique). In contrast, AOT requires all code to be compiled before deployment, preventing new code generation at runtime. The Runtime engine primarily uses reflection, aiming to provide a viable solution in AOT environments without sacrificing too much performance.
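
The practical difference between the two engines boils down to how a constructor call is executed: the Runtime engine invokes ConstructorInfo via reflection on every resolution, while the Dynamic engine compiles a delegate (via expression trees / IL emit) that performs the same construction as plain code. A simplified illustration of the two approaches, not the actual engine code (StudentService is a stub here):

using System;
using System.Linq.Expressions;
using System.Reflection;

public class StudentService { }

class EngineComparison
{
    static void Main()
    {
        ConstructorInfo ctor = typeof(StudentService).GetConstructor(Type.EmptyTypes)!;

        // Runtime-engine style: reflective invocation each time (works under Native AOT).
        object viaReflection = ctor.Invoke(null);

        // Dynamic-engine style: compile a delegate once, then call it like normal code
        // (requires runtime code generation, which Native AOT does not allow).
        Func<object> factory = Expression.Lambda<Func<object>>(Expression.New(ctor)).Compile();
        object viaCompiledDelegate = factory();

        Console.WriteLine((viaReflection, viaCompiledDelegate));
    }
}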

Let's examine how the Dynamic engine resolves services:

// Inside DynamicServiceProviderEngine
public override Func<ServiceProviderEngineScope, object?> RealizeService(ServiceCallSite callSite)
{
    // Track the number of times the delegate is invoked
    int callCount = 0; 
    return scope =>
    {
        // First time: Resolve the service using the simpler runtime resolver.
        // This ensures the service can be resolved quickly without waiting for compilation.
        var result = CallSiteRuntimeResolver.Instance.Resolve(callSite, scope);
      
        // Second time: Initiate background compilation optimization.
        if (Interlocked.Increment(ref callCount) == 2) 
        {
            // Queue work to the thread pool without capturing the execution context.
            _ = ThreadPool.UnsafeQueueUserWorkItem(_ =>
            {
                try
                {
                    // Replace the current service accessor with one using the 
                    // compiled/optimized delegate (using Emit/Expression trees).
                    _serviceProvider.ReplaceServiceAccessor(callSite, base.RealizeService(callSite)); 
                }
                catch (Exception ex)
                {
                    DependencyInjectionEventSource.Log.ServiceRealizationFailed(ex, _serviceProvider.GetHashCode());
                    Debug.Fail($"We should never get exceptions from the background compilation.{Environment.NewLine}{ex}");
                }
            }, 
            null);
        }
        return result;
    };
}

The key idea here is "resolve-then-optimize". The first resolution uses a simple runtime resolver for quick service instance return. When the service is requested again, a background compilation process starts, generating a more efficient resolution delegate. Once compilation completes, this optimized delegate replaces the original, speeding up subsequent resolutions. This approach gradually optimizes performance without impacting application startup time.

2.2 ServiceProviderEngineScope

Enter ServiceProviderEngineScope. This class is the concrete representation of a service scope. Its definition shows that it combines the capabilities of a service provider with scope management:

internal sealed class ServiceProviderEngineScope : 
    IServiceScope,          // Can manage its own lifetime
    IServiceProvider,       // Can resolve services
    IKeyedServiceProvider,  // Can resolve keyed services
    IAsyncDisposable,       // Can be disposed asynchronously
    IServiceScopeFactory    // Can create nested scopes
{
    // List of services within this scope that implement IDisposable or IAsyncDisposable
    private List<object>? _disposables; 
  
    // Cache for resolved services within this scope (primarily for Scoped lifetime services)
    // Singleton services are typically cached directly on the CallSite's root cache.
    internal Dictionary<ServiceCacheKey, object?> ResolvedServices { get; } 

    // Confirms its role as the service provider for this scope
    public IServiceProvider ServiceProvider => this; 

    // Allows creating nested scopes, which are independent of this one
    // Note: Creates scopes from the RootProvider
    public IServiceScope CreateScope() => RootProvider.CreateScope(); 
  
    // Other members: IsRootScope, RootProvider, Sync object, _disposed flag...
}

Observing its GetService logic reveals its straightforward nature: it uses the root ServiceProvider (RootProvider) to resolve services, passing this (the current scope) as the engineScope. This subtly hints that the engineScope exists primarily to manage the Scoped lifetime. The ResolvedServices dictionary holds the instances of Scoped services resolved within this specific scope, ensuring they are unique within this scope instance.

public object? GetService(Type serviceType)
{
    if (_disposed)
    {
        ThrowHelper.ThrowObjectDisposedException();
    }
    // Delegates resolution to the RootProvider, passing itself as the scope context.
    return RootProvider.GetService(ServiceIdentifier.FromServiceType(serviceType), this); 
}
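
From the consumer's side, this is what scope isolation looks like in practice. Using the ScopeService registered earlier, each scope hands out its own instance, while repeated resolutions within one scope return the same instance (a fragment; provider stands for the root IServiceProvider):

using (IServiceScope scope1 = provider.CreateScope())
using (IServiceScope scope2 = provider.CreateScope())
{
    var a1 = scope1.ServiceProvider.GetRequiredService<ScopeService>();
    var a2 = scope1.ServiceProvider.GetRequiredService<ScopeService>();
    var b1 = scope2.ServiceProvider.GetRequiredService<ScopeService>();

    Console.WriteLine(ReferenceEquals(a1, a2)); // True  - cached in scope1's ResolvedServices
    Console.WriteLine(ReferenceEquals(a1, b1)); // False - scope2 has its own cache
}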

Another crucial method is CaptureDisposable. Services implementing disposable interfaces are added to the _disposables list.

internal object? CaptureDisposable(object? service)
{
    // If the service is the scope itself or doesn't implement IDisposable/IAsyncDisposable, ignore.
    if (ReferenceEquals(this, service) || !(service is IDisposable || service is IAsyncDisposable))
    {
        return service;
    }

    bool disposed = false;
    lock (Sync) // Synchronization lock for thread safety
    {
        if (_disposed) // If the scope is already disposed, mark for immediate disposal.
        {
            disposed = true;
        }
        else
        {
            // Initialize the list if needed and add the service.
            _disposables ??= new List<object>(); 
            _disposables.Add(service);
        }
    }

    // Don't run potentially long-running customer disposal code under the lock.
    if (disposed) // If the scope was already disposed when trying to capture.
    {
        // Dispose the service immediately.
        if (service is IDisposable disposable)
        {
            disposable.Dispose();
        }
        else 
        {
            // Handle IAsyncDisposable synchronously if needed (rare case).
            object? localService = service; 
            Task.Run(() => ((IAsyncDisposable)localService).DisposeAsync().AsTask()).GetAwaiter().GetResult();
        }
        // Throw exception because the scope is already disposed.
        ThrowHelper.ThrowObjectDisposedException(); 
    }

    return service; // Return the original service instance.
}

When a ServiceProviderEngineScope is disposed (via DisposeAsync), all the captured disposable services within its scope (i.e., those in _disposables) are disposed as well, in reverse order of resolution.

public async ValueTask DisposeAsync()
{
    List<object>? toDispose = BeginDispose(); // Retrieves _disposables and marks the scope as disposed
    if (toDispose != null)
    {
        // Dispose services in reverse order...
        for (int i = toDispose.Count - 1; i >= 0; i--)
        {
            object disposable = toDispose[i];
            if (disposable is IAsyncDisposable asyncDisposable)
            {
                await asyncDisposable.DisposeAsync().ConfigureAwait(false);
            }
            else
            {
                ((IDisposable)disposable).Dispose();
            }
        }
    }
}

private List<object>? BeginDispose()
{
    lock (Sync)
    {
        if (_disposed) return null;
        _disposed = true; 
        // If this is the root scope and the root provider hasn't been disposed yet,
        // trigger the root provider's disposal (which handles singletons).
        if (IsRootScope && !RootProvider.IsDisposed()) RootProvider.Dispose(); 
    }
    return _disposables;
}

A common question arises: Since singleton instances aren't stored in the scope's ResolvedServices but cached on the CallSite, how are they disposed? Will they leak?

No need to worry. Although singleton instances are cached on the call site, disposable singletons are still captured, and always by the root scope (VisitRootCache, shown later, resolves them against RootProvider.Root and calls CaptureDisposable there). So when the root scope is disposed, its _disposables list already contains those singletons, and BeginDispose additionally makes sure the root provider itself gets disposed.
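
A small sketch to confirm this behavior: a disposable singleton (hypothetical ConnectionPool type) is disposed exactly once, when the root provider is disposed, not when an individual scope goes away:

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<ConnectionPool>();

var provider = services.BuildServiceProvider();
using (IServiceScope scope = provider.CreateScope())
{
    _ = scope.ServiceProvider.GetRequiredService<ConnectionPool>();
}                    // scope disposed: nothing is printed, the singleton survives

provider.Dispose();  // root provider disposed: prints "ConnectionPool disposed"

public sealed class ConnectionPool : IDisposable
{
    public void Dispose() => Console.WriteLine("ConnectionPool disposed");
}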

3 ServiceCallSite

The primary responsibility of a ServiceCallSite is to encapsulate the logic required to resolve a service. It can represent various resolution strategies: a constructor call, a factory delegate invocation, a constant (pre-supplied) instance, an IEnumerable<T> aggregation, and so on. The DI system uses this abstraction to describe how a service instance should be obtained.

internal abstract class ServiceCallSite
{
    protected ServiceCallSite(ResultCache cache)
    {
        Cache = cache;
    }

    public abstract Type ServiceType { get; } // The type of service being requested
    public abstract Type? ImplementationType { get; } // The concrete implementation type (if applicable)
    public abstract CallSiteKind Kind { get; } // The kind of call site (Constructor, Factory, etc.)
    public ResultCache Cache { get; } // Caching information for the resolved instance
    public object? Value { get; set; } // Cached value (used primarily for singletons)
    public object? Key { get; set; } // Service key (for keyed services)

    // Determines if the resolved instance should be captured for disposal
    public bool CaptureDisposable => ImplementationType == null || 
        typeof(IDisposable).IsAssignableFrom(ImplementationType) || 
        typeof(IAsyncDisposable).IsAssignableFrom(ImplementationType); 
}

3.1 ResultCache

The ResultCache within a ServiceCallSite defines how the resolved result should be cached.

public struct ResultCache
{
    public CallSiteResultCacheLocation Location { get; set; } // Where to cache (Root, Scope, etc.)
    public ServiceCacheKey Key { get; set; } // Cache key (relevant for Scoped services)
}

CallSiteResultCacheLocation is an enum defining cache behaviors:

  • Root: The service instance should be cached in the root IServiceProvider. This typically means the service is a Singleton, created once per application lifetime and shared across all requests/scopes.
  • Scope: The service instance should be cached within the current scope (ServiceProviderEngineScope). For Scoped services, an instance is created once per scope and shared within that scope.
  • Dispose: Despite the name, in the context of ResultCache, this indicates the service is Transient (a new instance is created every time it's requested). The name relates to the fact that these instances still need to be tracked for disposal if they implement IDisposable/IAsyncDisposable.
  • None: No caching is performed for the service instance (e.g., for constant values).

The ServiceCacheKey struct combines the ServiceIdentifier and a Slot index, used internally to differentiate multiple implementations resolved within a scope.

internal readonly struct ServiceCacheKey : IEquatable<ServiceCacheKey>
{
    public ServiceIdentifier ServiceIdentifier { get; }
    public int Slot { get; } // Slot index (e.g., last implementation might be slot 0)
}
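
The Slot matters when the same service type is registered more than once: resolving the type directly maps to the most recent registration (slot 0), while IEnumerable<T> pulls in every slot, so each registration's instance gets its own cache entry. A sketch under that assumption, using two hypothetical scoped registrations:

using System;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddScoped<IStudentRepository, StudentRepository>();    // older registration
services.AddScoped<IStudentRepository, StudentRepository2>();   // most recent: wins for direct resolution

using var provider = services.BuildServiceProvider();
using var scope = provider.CreateScope();

var direct = scope.ServiceProvider.GetRequiredService<IStudentRepository>();
var all = scope.ServiceProvider.GetServices<IStudentRepository>().ToArray();

Console.WriteLine(direct.GetType().Name);            // StudentRepository2
Console.WriteLine(all.Length);                       // 2: one element per registration
Console.WriteLine(ReferenceEquals(direct, all[^1])); // True within the same scope (they share a cache key)

public interface IStudentRepository { }
public class StudentRepository : IStudentRepository { }
public class StudentRepository2 : IStudentRepository { }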

3.2 CallSiteFactory.GetCallSite

Let's look at how call sites are created. GetCallSite, which we saw called earlier in CreateServiceAccessor, first checks the factory's call-site cache and then falls back to CreateCallSite:

// Inside CallSiteFactory
private ServiceCallSite? CreateCallSite(ServiceIdentifier serviceIdentifier, CallSiteChain callSiteChain)
{
    // Prevent stack overflow during deep recursive resolutions
    if (!_stackGuard.TryEnterOnCurrentStack()) 
    {
        return _stackGuard.RunOnEmptyStack(CreateCallSite, serviceIdentifier, callSiteChain);
    }

    // Use a lock specific to the service identifier to ensure thread-safe call site creation.
    // This guarantees that even with concurrent requests, the call site for a given service 
    // is created only once. Example:
    // Thread 1: C -> D -> A
    // Thread 2: E -> D -> A  (Both threads need the call site for D and A)
    var callsiteLock = _callSiteLocks.GetOrAdd(serviceIdentifier, static _ => new object());
    lock (callsiteLock)
    {
        // Check for circular dependencies in the resolution chain.
        callSiteChain.CheckCircularDependency(serviceIdentifier); 

        // Attempt to create the call site using different strategies:
        // 1. Exact match for the service descriptor.
        // 2. Match an open generic definition.
        // 3. Handle IEnumerable<T> requests.
        ServiceCallSite? callSite = TryCreateExact(serviceIdentifier, callSiteChain) ??
                                   TryCreateOpenGeneric(serviceIdentifier, callSiteChain) ??
                                   TryCreateEnumerable(serviceIdentifier, callSiteChain);
        return callSite;
    }
}

Here's a simplified overview of the call site creation process:

  1. Check Cache: Look for an existing call site in the factory's cache. If found, return it immediately.
  2. Get ServiceDescriptor: Retrieve the corresponding ServiceDescriptor based on the ServiceIdentifier (for keyed services without a key, it might default to the last registered one).
  3. Create ServiceCallSite (Order Matters; see the registration sketch after this list):
    • TryCreateExact:
      • Calculate ResultCache based on the descriptor's lifetime.
      • If an instance is already provided (ImplementationInstance), return a ConstantCallSite.
      • If a factory delegate is provided (ImplementationFactory), return a FactoryCallSite.
      • If an implementation type is provided (ImplementationType), return a ConstructorCallSite.
    • TryCreateOpenGeneric:
      • Find a ServiceDescriptor for the open generic definition.
      • Calculate ResultCache.
      • Construct the closed generic implementation type using the specific generic arguments from the ServiceIdentifier.
      • Perform AOT compatibility checks (e.g., ensure code for value type generics exists).
      • If successful, return a ConstructorCallSite.
    • TryCreateEnumerable:
      • Verify the requested type is IEnumerable<T>.
      • Perform AOT compatibility checks (e.g., ensure code for T[] exists).
      • Find all matching ServiceDescriptors for T.
      • Create call sites for each match (usually involves looping through TryCreateExact and TryCreateOpenGeneric for each descriptor).
      • Return an IEnumerableCallSite that aggregates the results.
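
To make the TryCreateExact branches concrete, here is a sketch of how typical registrations map onto call-site kinds (the kinds themselves are internal; services stands for any IServiceCollection, and the repository types are the same stand-ins used earlier):

// ImplementationInstance -> ConstantCallSite (the provided object is handed back as-is)
services.AddSingleton<IStudentRepository>(new StudentRepository());

// ImplementationFactory  -> FactoryCallSite (the delegate runs at resolution time)
services.AddScoped<IStudentRepository>(sp => new StudentRepository());

// ImplementationType     -> ConstructorCallSite (a constructor is picked, its parameters resolved recursively)
services.AddTransient<IStudentRepository, StudentRepository>();

// Requesting IEnumerable<IStudentRepository> -> IEnumerableCallSite aggregating one call site per descriptor
var repositories = provider.GetServices<IStudentRepository>();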

4 CallSiteVisitor

With the above understanding, we can now explore the internals of service resolution. Resolution essentially involves the engine "visiting" the call site structure using a CallSiteVisitor. Resolving a service means visiting its corresponding call site.

// Base class for resolvers (like CallSiteRuntimeResolver)
protected virtual TResult VisitCallSite(ServiceCallSite callSite, TArgument argument)
{
    // Stack guard checks...
    if (!_stackGuard.TryEnterOnCurrentStack()) 
    {
        return _stackGuard.RunOnEmptyStack(VisitCallSite, callSite, argument);
    }

    // Branch resolution based on the call site's cache location (lifetime)
    switch (callSite.Cache.Location) 
    {
        case CallSiteResultCacheLocation.Root: // Singleton
            return VisitRootCache(callSite, argument);
        case CallSiteResultCacheLocation.Scope: // Scoped
            return VisitScopeCache(callSite, argument);
        case CallSiteResultCacheLocation.Dispose: // Transient
            return VisitDisposeCache(callSite, argument);
        case CallSiteResultCacheLocation.None: // Constant / No Cache
            return VisitNoCache(callSite, argument);
        default:
            throw new ArgumentOutOfRangeException(nameof(callSite.Cache.Location));
    }
}

For clarity, we'll focus on the Runtime resolver's logic, which uses reflection and is easier to follow than the Emit/Expression-based logic of the Dynamic engine.

4.1 VisitRootCache (Singleton)

Let's see how singleton instances are accessed:

// Inside CallSiteRuntimeResolver
protected override object? VisitRootCache(ServiceCallSite callSite, RuntimeResolverContext context)
{
    // Check if the value is already cached directly on the call site.
    if (callSite.Value is object value) 
    {
        // Value already calculated, return it directly.
        return value;
    }

    var lockType = RuntimeResolverLock.Root;
    // Singletons are associated with the root scope.
    ServiceProviderEngineScope serviceProviderEngine = context.Scope.RootProvider.Root; 

    // Use a lock on the call site object itself for thread safety.
    lock (callSite) 
    {
        // Double-checked locking: Re-check if another thread cached the value while waiting for the lock.
        if (callSite.Value is object callSiteValue) 
        {
            return callSiteValue;
        }

        // If still null, proceed to resolve the value using the main visitor logic.
        // Pass the root scope as the context.
        object? resolved = VisitCallSiteMain(callSite, new RuntimeResolverContext
        {
            Scope = serviceProviderEngine,
            AcquiredLocks = context.AcquiredLocks | lockType // Track acquired locks
        });

        // Capture the resolved service if it's disposable (tracked by the root scope).
        serviceProviderEngine.CaptureDisposable(resolved); 

        // Cache the resolved instance directly on the call site's Value property.
        callSite.Value = resolved; 
        return resolved;
    }
}

The core resolution logic happens in VisitCallSiteMain. The CallSiteKind was determined during call site creation, so we just need to handle each kind:

// Inside CallSiteRuntimeResolver (base visitor logic)
protected virtual TResult VisitCallSiteMain(ServiceCallSite callSite, TArgument argument)
{
    switch (callSite.Kind)
    {
        case CallSiteKind.Factory:
            return VisitFactory((FactoryCallSite)callSite, argument);
        case CallSiteKind.IEnumerable:
            return VisitIEnumerable((IEnumerableCallSite)callSite, argument);
        case CallSiteKind.Constructor:
            return VisitConstructor((ConstructorCallSite)callSite, argument);
        case CallSiteKind.Constant:
            return VisitConstant((ConstantCallSite)callSite, argument);
        case CallSiteKind.ServiceProvider:
            return VisitServiceProvider((ServiceProviderCallSite)callSite, argument);
        // Other kinds might exist (e.g., KeyedService)
        default:
            throw new NotSupportedException($"Call site type {callSite.GetType()} is not supported.");
    }
}

Let's examine the most common case: ConstructorCallSite. It simply involves creating an instance via its constructor. If the constructor has dependencies, those are resolved recursively first. Easy! (Other kinds like ConstantCallSite or FactoryCallSite are generally simpler).

// Inside CallSiteRuntimeResolver
protected override object VisitConstructor(ConstructorCallSite constructorCallSite, RuntimeResolverContext context)
{
    object?[] parameterValues;
    if (constructorCallSite.ParameterCallSites.Length == 0)
    {
        parameterValues = Array.Empty<object>();
    }
    else
    {
        parameterValues = new object?[constructorCallSite.ParameterCallSites.Length];
        // Recursively resolve each constructor parameter dependency.
        for (int index = 0; index < parameterValues.Length; index++)
        {
            parameterValues[index] = VisitCallSite(constructorCallSite.ParameterCallSites[index], context);
        }
    }

    try
    {
        // Invoke the constructor using reflection.
        return constructorCallSite.ConstructorInfo.Invoke(BindingFlags.DoNotWrapExceptions, binder: null, parameters: parameterValues, culture: null);
    }
    catch (Exception ex) when (ex.InnerException != null)
    {
        // Unwrap exceptions for better debugging experience
        ExceptionDispatchInfo.Capture(ex.InnerException).Throw();
        // The above line will always throw, this is just to satisfy the compiler.
        throw; 
    }
}

4.2 VisitScopeCache (Scoped)

Visiting the singleton cache involved a simple double-checked lock on the call site. Let's see if the scoped cache differs:

// Inside CallSiteRuntimeResolver
protected override object? VisitScopeCache(ServiceCallSite callSite, RuntimeResolverContext context)
{
    // Check for the rare scenario where a scoped service resolution is happening 
    // directly within the root scope (e.g., IServiceProvider injected into singleton).
    // In this case, treat it like a singleton for locking purposes.
    return context.Scope.IsRootScope ? 
        VisitRootCache(callSite, context) : 
        VisitCache(callSite, context, context.Scope, RuntimeResolverLock.Scope); // Standard scoped resolution
}

Indeed, it's different! It first checks if we are unexpectedly in the root scope (context.Scope.IsRootScope). This can happen if, for instance, IServiceProvider is injected directly into a singleton service, and that provider is later used to resolve a scoped service. In such rare cases, it falls back to VisitRootCache's locking behavior.

Most of the time, it calls VisitCache. The synchronization model in VisitCache is quite clever, using a combination of scope-level locks and tracking flags (AcquiredLocks) to prevent deadlocks during recursive resolutions involving mixed lifetimes.

// Inside CallSiteRuntimeResolver
private object? VisitCache(ServiceCallSite callSite, RuntimeResolverContext context, 
                           ServiceProviderEngineScope serviceProviderEngineScope, RuntimeResolverLock lockType)
{
    bool lockTaken = false;
    // Use the scope's internal sync object for locking.
    object sync = serviceProviderEngineScope.Sync; 
    // Get the cache dictionary specific to this scope instance.
    Dictionary<ServiceCacheKey, object?> resolvedServices = serviceProviderEngineScope.ResolvedServices; 

    // Only acquire the scope lock if it hasn't already been acquired further up the resolution chain for this scope.
    if ((context.AcquiredLocks & lockType) == 0) 
    {
        Monitor.Enter(sync, ref lockTaken);
    }

    try
    {
        // Check the scope's cache first (inside the lock).
        if (resolvedServices.TryGetValue(callSite.Cache.Key, out object? resolved))
        {
            return resolved;
        }

        // If not found in the cache, resolve using the main visitor logic.
        // Pass the current scope and update the acquired locks flag.
        resolved = VisitCallSiteMain(callSite, new RuntimeResolverContext
        {
            Scope = serviceProviderEngineScope,
            AcquiredLocks = context.AcquiredLocks | lockType 
        });

        // Capture for disposal if necessary (tracked by this specific scope).
        serviceProviderEngineScope.CaptureDisposable(resolved); 

        // Add the resolved service to this scope's cache.
        resolvedServices.Add(callSite.Cache.Key, resolved); 
        return resolved;
    }
    finally
    {
        // Release the lock only if it was acquired in this specific call frame.
        if (lockTaken) 
        {
            Monitor.Exit(sync);
        }
    }
}

The elegance here lies in the scope isolation. For non-root scopes (Scoped lifetime), locking is confined within that specific scope instance (serviceProviderEngineScope.Sync and resolvedServices). There's no need for global locks across different scopes as required for Singletons (VisitRootCache locking on callSite itself).

4.3 VisitDisposeCache (Transient)

Finally, let's look at the Transient case:

// Inside CallSiteRuntimeResolver
protected override object? VisitDisposeCache(ServiceCallSite transientCallSite, RuntimeResolverContext context)
{
    // Resolve the service using the main visitor logic...
    // ...and immediately capture it for disposal within the current scope if needed.
    // No caching occurs.
    return context.Scope.CaptureDisposable(VisitCallSiteMain(transientCallSite, context)); 
}

It's exceptionally simple: the service is resolved through VisitCallSiteMain with no caching at all. Every request for a transient service produces a new instance, and that instance is captured by the current scope for potential disposal.
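
A final sketch illustrating that capture step: every transient disposable resolved from a scope is tracked and disposed with the scope (which is also why resolving transient disposables from the root provider deserves caution, since they would then live until the root itself is disposed). The TempFile type is hypothetical:

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<TempFile>();

using var provider = services.BuildServiceProvider();

using (IServiceScope scope = provider.CreateScope())
{
    // Three requests -> three distinct instances, all captured by the scope.
    _ = scope.ServiceProvider.GetRequiredService<TempFile>();
    _ = scope.ServiceProvider.GetRequiredService<TempFile>();
    _ = scope.ServiceProvider.GetRequiredService<TempFile>();
}   // scope disposed: "TempFile disposed" is printed three times

public sealed class TempFile : IDisposable
{
    public void Dispose() => Console.WriteLine("TempFile disposed");
}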