Having defined the desired host process groups, you can associate each user-mode driver with a particular group by adding the UserProcGroup registry entry to the device driver's registry subkey (see Table 6-3 earlier in this lesson). By default, the UserProcGroup registry entry does not exist, which corresponds to a configuration in which Device Manager loads every user-mode driver into a separate host process instance.
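For example, the following registry entry assigns a hypothetical serial driver to host process group 3. The subkey name and the group number are placeholders for illustration, and the group itself must already be defined as described earlier:

[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\MySerialDriver]
   "UserProcGroup"=dword:3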
Binary Image Builder Configuration
As explained in Chapter 2, "Building and Deploying a Run-Time Image," the Windows Embedded CE build process relies on binary image builder (.bib) files to generate the content of the run-time image and to define the final memory layout of the device. Among other things, you can specify a combination of flags in a driver's module definition. Issues can arise if the .bib file settings for a device driver do not match its registry entries. For example, if you specify the K flag for a device driver module in a .bib file and also set the DEVFLAGS_LOAD_AS_USERPROC flag in the driver's registry subkey to load the driver into the user-mode driver host process, the driver fails to load because the K flag instructs Romimage.exe to load the module in kernel space above the memory address 0x80000000. To load a driver in user mode, be sure to load the module into user space below 0x80000000, such as into the NK memory region defined in the Config.bib file for the BSP. The following .bib file entry demonstrates how to load a user-mode driver into the NK memory region:
driver.dll $(_FLATRELEASEDIR)\driver.dll NK SHQ
The S and H flags indicate that Driver.dll is both a system file and a hidden file; the path specifies that Romimage.exe takes the file from the flat release directory. The Q flag specifies that the system can load this module concurrently into both kernel and user space. It adds two copies of the DLL to the run-time image, one with and one without the K flag, and in this way doubles the ROM and RAM space requirements for the driver. Use the Q flag sparingly.
Extending the above example, the Q flag is equivalent to the following:
driver.dll $(_FLATRELEASEDIR)\driver.dll NK SH
driver.dll $(_FLATRELEASEDIR)\driver.dll NK SHK
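To complete the example, the driver's registry subkey must set the DEVFLAGS_LOAD_AS_USERPROC flag (0x10) in its Flags value so that Device Manager loads the module in a user-mode driver host process. The subkey name and prefix below are placeholders for illustration:

[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\MyDriver]
   "Dll"="driver.dll"
   "Prefix"="DRV"
   "Flags"=dword:10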
Lesson Summary
Windows Embedded CE can load drivers into kernel space or user space. Drivers running in kernel space have access to system APIs and kernel memory and can affect the stability of the system if failures occur. However, properly implemented kernel-mode drivers exhibit better performance than user-mode drivers, due to reduced context switching between kernel and user mode. On the other hand, the advantage of user-mode drivers is that failures primarily affect the current user-mode process. User-mode drivers also run with fewer privileges, which can be an important consideration for untrusted drivers from third-party vendors.
To integrate a driver running in user mode with Device Manager running in kernel mode, Device Manager uses a Reflector service that loads the driver in a user-mode driver host process and forwards the stream function calls and return values between the driver and Device Manager. In this way, applications can continue to use familiar file system APIs to access the driver, and the driver does not need code changes regarding the stream interface API to remain compatible with Device Manager. By default, user-mode drivers run in separate host processes, but you can also configure host process groups and associate drivers with these groups by adding a corresponding UserProcGroup registry entry to a driver's registry subkey. Driver subkeys can reside in any registry location, yet if you want to load the drivers at boot time automatically, you must place the subkeys into Device Manager's RootKey, which by default is HKEY_LOCAL_MACHINE\Drivers\BuiltIn. Drivers that have their subkeys in different locations can be loaded on demand by calling the ActivateDeviceEx function.
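The following minimal sketch illustrates on-demand loading through ActivateDeviceEx. The registry path is hypothetical and must point to a subkey that describes the driver, for example with a Dll value; error handling is reduced to a debug message:

#include <windows.h>

// Load a driver on demand from a subkey outside Device Manager's RootKey.
HANDLE LoadMyDriver(void)
{
    HANDLE hDevice = ActivateDeviceEx(
        L"Drivers\\OnDemand\\MyDriver",  // hypothetical subkey under HKLM
        NULL,                            // no additional REGINI entries
        0,                               // count of REGINI entries
        NULL);                           // no data passed to the driver's Init

    // Treat both NULL and INVALID_HANDLE_VALUE as failure to stay on the
    // safe side.
    if (hDevice == NULL || hDevice == INVALID_HANDLE_VALUE)
    {
        RETAILMSG(1, (L"ActivateDeviceEx failed, error %u\r\n", GetLastError()));
        return NULL;
    }

    return hDevice;  // pass this handle to DeactivateDevice to unload
}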
Lesson 4: Implementing an Interrupt Mechanism in a Device Driver
Interrupts are notifications generated either in hardware or in software to inform the CPU that an event has occurred that requires immediate attention, such as a timer event or a keystroke. In response to an interrupt, the CPU stops executing the current thread, jumps to a trap handler in the kernel to respond to the event, and then resumes executing the original thread after the interrupt is handled. In this way, integrated and peripheral hardware components, such as the system clock, serial ports, network adapters, keyboards, mice, touch screens, and other devices, can get the attention of the CPU and have the kernel exception handler run appropriate code in interrupt service routines (ISRs) within the kernel or in associated device drivers. To implement interrupt processing in a device driver efficiently, you must have a detailed understanding of the Windows Embedded CE 6.0 interrupt handling mechanisms, including the registration of ISRs in the kernel and the execution of interrupt service threads (ISTs) within the Device Manager process.
After this lesson, you will be able to:
■ Implement an interrupt handler in the OEM adaptation layer (OAL).
■ Register and handle interrupts in a device driver interrupt service thread (IST).
Estimated lesson time: 40 minutes.
Interrupt Handling Architecture
Windows Embedded CE 6.0 is a portable operating system that supports different CPU types with varying interrupt schemes by implementing a flexible interrupt handling architecture. Most importantly, the interrupt handling architecture takes advantage of interrupt-synchronization capabilities in the OAL and thread-synchronization capabilities of Windows Embedded CE to split interrupt processing into ISRs and ISTs, as illustrated in Figure 6-6.
Figure 6-6 IRQs, ISRs, SYSINTRs, and ISTs
Windows Embedded CE 6.0 interrupt handling is based on the following concepts:
1. During the boot process, the kernel calls the OEMInit function in the OAL to register all available ISRs built into the kernel with their corresponding hardware interrupts based on their interrupt request (IRQ) values. IRQ values are numbers that identify the source of the interrupt in the processor interrupt controller registers.
2. Device drivers can dynamically install ISRs implemented in ISR DLLs by calling the LoadIntChainHandler function. LoadIntChainHandler loads the ISR DLL into kernel memory space and registers the specified ISR routine with the specified IRQ value in the kernel's interrupt dispatch table (see the installation sketch following this list).
3. An interrupt occurs to notify the CPU that an event requires suspending the current thread of execution and transferring control to a different routine.
4. In response to the interrupt, the CPU stops executing the current thread and jumps to the kernel exception handler as the primary target of all interrupts.
5. The exception handler masks off all interrupts of an equal or lower priority and then calls the appropriate ISR registered to handle the current interrupt. Most hardware platforms use interrupt masks and interrupt priorities to implement hardware-based interrupt synchronization mechanisms.
6. The ISR performs any necessary tasks, such as masking the interrupt so that the hardware device cannot trigger further interrupts that would interfere with the ongoing processing, and then returns a SYSINTR value to the exception handler. The SYSINTR value is a logical interrupt identifier.
7. The exception handler passes the SYSINTR value to the kernel's interrupt support handler, which determines the event associated with the SYSINTR value and, if one is found, signals that event to wake any ISTs waiting on the interrupt (see the IST skeleton following this list).
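The following sketch shows how a driver might install and remove an installable ISR as described in step 2. The DLL name, the exported routine name, and the IRQ value are placeholders for illustration:

#include <windows.h>

static HANDLE g_hIsrHandler;  // handle returned by LoadIntChainHandler

// Install the installable ISR during driver initialization.
BOOL InstallIsr(void)
{
    // Loads isr.dll into kernel space and chains ISRHandler into the
    // kernel's dispatch logic for IRQ 5 (all placeholder values).
    g_hIsrHandler = LoadIntChainHandler(L"isr.dll", L"ISRHandler", 5);
    return (g_hIsrHandler != NULL);
}

// Remove the ISR again when the driver unloads.
void RemoveIsr(void)
{
    if (g_hIsrHandler != NULL)
    {
        FreeIntChainHandler(g_hIsrHandler);
        g_hIsrHandler = NULL;
    }
}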
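Steps 6 and 7 translate into the classic IST pattern on the driver side: associate an event with the SYSINTR value, wait for the kernel to signal it, service the hardware, and acknowledge the interrupt. The following minimal skeleton assumes the driver has already obtained a valid SYSINTR value, for example from a registry entry or by requesting one from the OAL:

#include <windows.h>

// Minimal interrupt service thread skeleton. pContext is assumed to carry
// the SYSINTR value of the logical interrupt this IST services.
DWORD WINAPI IstMain(LPVOID pContext)
{
    DWORD dwSysIntr = (DWORD)pContext;

    // Event that the kernel signals when the ISR returns this SYSINTR value.
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (hEvent == NULL)
        return 1;

    // Associate the event with the logical interrupt (step 7).
    if (!InterruptInitialize(dwSysIntr, hEvent, NULL, 0))
    {
        CloseHandle(hEvent);
        return 1;
    }

    for (;;)
    {
        // Block until the kernel signals the interrupt event.
        WaitForSingleObject(hEvent, INFINITE);

        // ... service the hardware here ...

        // Inform the kernel that interrupt processing is complete so that
        // the interrupt source can be re-enabled.
        InterruptDone(dwSysIntr);
    }

    // Not reached in this sketch; a real driver would break out of the loop
    // on shutdown and call InterruptDisable(dwSysIntr) and CloseHandle(hEvent).
    return 0;
}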