Other tasks were straightforward, such as converting the struct kstat type received from Linux to the struct vattr type expected by puffs and the NetBSD kernel.
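Below is a minimal sketch of such a conversion. The structure definitions are simplified stand-ins (the real Linux struct kstat and NetBSD struct vattr contain more fields, differ between versions, and cannot be included in the same translation unit anyway, which is exactly the ABI concern discussed next); the field mapping shown is illustrative.

```c
/*
 * Minimal sketch of a kstat -> vattr conversion, using simplified
 * stand-in definitions for both structures.
 */
#include <stdint.h>
#include <time.h>

struct linux_kstat {            /* stand-in for Linux struct kstat */
	uint64_t        ino;
	uint32_t        mode;
	uint32_t        nlink;
	uint32_t        uid, gid;
	int64_t         size;
	struct timespec atime, mtime, ctime;
	uint32_t        blksize;
	uint64_t        blocks;
};

struct netbsd_vattr {           /* stand-in for NetBSD struct vattr */
	uint32_t        va_mode;
	uint32_t        va_nlink;
	uint32_t        va_uid, va_gid;
	uint64_t        va_fileid;
	uint64_t        va_size;
	uint64_t        va_bytes;
	uint32_t        va_blocksize;
	struct timespec va_atime, va_mtime, va_ctime;
};

static void
kstat_to_vattr(const struct linux_kstat *ks, struct netbsd_vattr *va)
{
	va->va_mode      = ks->mode;
	va->va_nlink     = ks->nlink;
	va->va_uid       = ks->uid;
	va->va_gid       = ks->gid;
	va->va_fileid    = ks->ino;
	va->va_size      = (uint64_t)ks->size;
	va->va_blocksize = ks->blksize;
	va->va_bytes     = ks->blocks * 512;	/* blocks are 512-byte units */
	va->va_atime     = ks->atime;
	va->va_mtime     = ks->mtime;
	va->va_ctime     = ks->ctime;
}
```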
ABI Considerations

Linking objects compiled against NetBSD headers to code compiled against Linux headers is, strictly speaking, not correct: there is no guarantee that the application binary interfaces of the two are identical and that the result will therefore work when linked together.
Making the offending type 64-bit on Linux made everything work. Similar care must be taken when mixing components from different NetBSD versions.

We estimate the differences between a rump environment and a real kernel environment, assess the impact of those differences, and provide anecdotal information on fixing several real-world bugs using rump file systems.

Even so, users routinely mount untrusted file systems using kernel code.
The BSD and Linux manual pages for mount warn: "It is possible for a corrupted file system to cause a crash." Worse, arbitrary memory access is known to be possible, and fixing each file system to be bullet-proof is at best extremely hard [32]. With a mounted rump file system, the code dealing with the untrusted image is isolated in its own process, thus mitigating an attack. As was seen in Table 1, the size difference between a real kernel file system and the kernel portion of puffs is considerable, about five-fold.
Since an operating system usually supports more than one file system, the real difference in code size is even larger. Additionally, puffs was written from the ground up to deal with untrusted input. To give an example of a useful scenario, a recent mailing list posting described a problem in which mounting a FAT file system from a USB stick caused a kernel crash. By using a mountable rump file system instead, this problem was reduced to an application core dump.
The problematic image was received from the reporter, and the problem in the kernel file system code was debugged and dealt with.

A common approach to kernel development is to first develop the algorithms in userspace and later integrate them into the kernel, but this adds an extra phase. The following items capture the ways in which rump file systems are superior to any single existing method.

No separate development cycle: There is no need to prototype with an ad hoc userspace implementation before writing kernel code.
Same environment: Userspace operating systems and emulators provide a separate environment. Migrating applications (e.g. OpenOffice or Firefox) and network connections there may be challenging. Since rump integrates as a mountable file system on the development host, this problem does not exist.

No bit-rot: There is no maintenance cost for case-specific userspace code, because it does not exist.

Short test cycle: The code-recompile-test cycle time is short, and a crash results in a core dump and inaccessible files, not a kernel panic and total application failure.
Userspace tools: Dynamic analysis tools such as Valgrind [21] can be used to instrument the code. A normal debugger can be used.

Complete isolation: Interface behavior can be changed, for example to test error conditions or inject faults, without worrying about affecting the rest of the host system.

To give an example, support for allocating an in-file-system journal was added to NetBSD FFS journaling. The author, Simon Burge, is a kernel developer who normally does not work on file systems.
He used rump and ukfs for development and described the process thusly: "Instead of rebooting with a new kernel to test new code, I was just able to run a simple program, and debug any issues with gdb.
It was also a lot safer working on a simple file system image in a file."

Another benefit is prototyping. One of the reasons for implementing the 4.4BSD LFS cleaner in userspace was ease of development and debugging.
Using rump file systems, this can easily be done without having to split the runtime environment and pay the overhead for easy development during production use. Although it is impossible to measure the ease of development by any formal method, we would like to draw the following analogy: kernel development on real hardware is to using emulators as using emulators is to developing as a userspace program.

Differences between environments

Rump file systems do not duplicate all corner cases accurately with respect to the kernel.
Theoretically, flushing behavior can be different when the file system code runs in userspace, and therefore some bugs might go unnoticed. On the flip side, the potentially different behavior exposes bugs that are otherwise very hard to detect when running in the kernel. Rump file systems do not possess exactly the same timing properties and details of the real kernel environment.
Our position is that this is not an issue. Differences can also be a benefit: varying usage patterns can expose bugs that were previously hidden. However, since this does not hold when using FFS through rump, the problem was triggered more easily. In fact, this problem was discovered by the author while working on the aforementioned journaling support by using rump file systems.
Another bug which was triggered much more frequently when using rump file systems was a race involving taking a socket lock in the NFS timer while the data was modified during the block on the socket lock. This bug was originally described by the author in a kernel mailing list post entitled "how can the nfs timer work?".
In our final example the kernel FAT file system driver used to ignore an out-of-space error when extending a file. The effect was that written data was accepted into the page cache, but could not be paged out to disk and was discarded without flagging an application error. The rump vnode pager is much less forgiving than the kernel vnode pager and panics if it does not find blocks which it can legally assume to be present.
This drew attention to the problem, and it was fixed by the author in revision 1.

Locks: Bohrbugs and Heisenbugs

Next we describe cases in which rump file systems have been used to debug real-world file system locking problems in NetBSD. The bugs most reliably repeatable in both a kernel environment and a rump file system are those which depend only on the input parameters and are independent of the environment and timing. Such bugs made it possible for an unprivileged user to panic the kernel with a simple program.
Both cases were reproduced by running a regular test program against a mounted rump file system, then debugged, fixed, and tested. Triggering race conditions depends on being able to repeat timing details.
One race was triggered when the rename source file was removed halfway through the operation. While this is a race condition, it was equally triggerable by using either a kernel file system or a mounted rump file system. It was similarly debugged and dealt with. Even if a situation depends on components not available in rump file systems, using rump may still be helpful.
The author wrote a patch which addressed the issue in the file system driver but did not have a system for full testing available at the time. The suggested patch was tested by simulating the condition in rump. Later, when it was tested by another person in a real environment, the patch worked as expected.

Preventing undead bugs with regression testing

When a bug is fixed, it is good practice to make sure it does not resurface [17] by writing a regression test.
In the case of kernel regression tests, the test is commonly run against a live kernel. This means that to run the test, the test setup must first be upgraded with the test kernel, bootstrapped, and only then can the test be executed. In case the test kernel crashes, it is difficult to get an automatic report in batch testing. Using a virtual machine helps a little, but issues still remain.
Consider a casual open source developer who adds a feature or fixes a bug; to run the regression tests, they must 1) download or create a full OS configuration, 2) upgrade the installation kernel and test programs, and 3) run the tests. Most likely steps 1 and 2 will involve manual work and lead to a diminished likelihood of testing.
Standalone rump file systems are regular userspace programs, so they do not have the above-mentioned setup complications. In addition to the test program, file system tests require an image to mount. This can be solved by creating a file system image dynamically in the test program and removing it once the test is done, as the sketch below illustrates. If the test survives for 10 seconds without crashing, it is deemed successful.
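The following is a minimal sketch of such a self-contained harness. It assumes a NetBSD host where newfs -F can create a file system inside a regular file, and uses a hypothetical exercise_fs() helper standing in for the actual test body that would drive the file system through rump/ukfs; names and details are illustrative, not the actual NetBSD test code.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <err.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical stand-in for the real test body, which would mount the
 * image through rump/ukfs and perform the operations that used to
 * trigger the bug. */
static int
exercise_fs(const char *image)
{
	(void)image;
	return 0;
}

int
main(void)
{
	char image[] = "/tmp/fstest.XXXXXX";
	char cmd[256];
	int fd, status;
	pid_t pid;

	/* Create a scratch image and put a file system on it.
	 * newfs -F creates a file system in a regular file on NetBSD
	 * (assumption to verify on the build host). */
	if ((fd = mkstemp(image)) == -1)
		err(1, "mkstemp");
	close(fd);
	snprintf(cmd, sizeof(cmd), "newfs -F -s 4m %s >/dev/null", image);
	if (system(cmd) != 0)
		errx(1, "newfs failed");

	/* Run the test in a child process: a file system crash is now just
	 * a core dump instead of a kernel panic. */
	switch ((pid = fork())) {
	case -1:
		err(1, "fork");
	case 0:
		alarm(10);		/* cap the run at 10 seconds */
		_exit(exercise_fs(image));
	default:
		waitpid(pid, &status, 0);
	}
	unlink(image);

	/* Surviving for 10 seconds without crashing counts as a pass. */
	if ((WIFEXITED(status) && WEXITSTATUS(status) == 0) ||
	    (WIFSIGNALED(status) && WTERMSIG(status) == SIGALRM)) {
		printf("test passed\n");
		return 0;
	}
	printf("test FAILED\n");
	return 1;
}
```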
While these suites generally provide the functionality of POSIX command line file utilities such as ls and cp, the name and usage of each command varies from suite to suite. The fs-utils [33] suite, envisioned by the author and implemented by Arnaud Ysmal, was built on standalone rump file systems.
fs-utils provides file system independent command line utilities with the same functionality and usage as their POSIX counterparts. The file system type is autodetected based on the image contents.
Additionally, fs-utils provides utilities which are necessary to move data across the barrier between the host system's file system namespace and the image. As with cp, all pathnames given to these utilities are either relative to the current working directory or absolute with respect to the root directory.

NetBSD is cross-buildable without superuser privileges; this capability is commonly referred to as build.sh, after the script which drives the build process.
For the system to be cross-buildable, the build process cannot rely on any non-standard kernel functionality being available, since it might not exist on a non-NetBSD build host. The canonical way to build a file system image for boot media used to be to create a regular file, mount it using the loopback driver, copy the files to the file system, and unmount the image.
This required the target file system to be supported on the build host and was not compatible with the goals of build.sh. When build.sh was introduced, the makefs utility was created to build a file system image from a directory tree entirely in userspace. In other words, the makefs application contains the file system driver. This approach requires neither privileges to mount a file system nor support for the target file system in the kernel.
The original utility had support for Berkeley FFS and was implemented by modifying and reimplementing the FFS kernel code to be able to run in userspace.
This was the only good approach available at the time. The process of makefs consists of four phases: 1) scan the source directory, 2) calculate the target image size based on the scan data, 3) create the target image, and 4) copy the source directory files to the target image. In the original version of makefs, all of the phases were implemented in a single C program.
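A minimal sketch of this phase structure is shown below; the helper names are hypothetical and their bodies are stubs. In the rump-based variant discussed next, only phase 4 changes: the files are copied into the image through the kernel file system driver running inside rump instead of through a userspace reimplementation.

```c
#include <stdio.h>

/* phase 1: scan the source directory (stub) */
static unsigned long scan_source_dir(const char *dir) { (void)dir; return 0; }
/* phase 2: image size from scan data: hard links, entry rounding, ... (stub) */
static unsigned long calc_image_size(unsigned long bytes) { return bytes + 1024 * 1024; }
/* phase 3: create the empty target image of the calculated size (stub) */
static void create_image(const char *img, unsigned long size) { (void)img; (void)size; }
/* phase 4: copy source files into the image; the only phase that needs
 * a file system driver (stub) */
static void populate_image(const char *img, const char *dir) { (void)img; (void)dir; }

int
main(int argc, char *argv[])
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s image.fs srcdir\n", argv[0]);
		return 1;
	}
	unsigned long bytes = scan_source_dir(argv[2]);	/* phase 1 */
	unsigned long size = calc_image_size(bytes);	/* phase 2 */
	create_image(argv[1], size);			/* phase 3 */
	populate_image(argv[1], argv[2]);		/* phase 4 */
	return 0;
}
```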
Notably, phase 4 is the only one that requires a duplicate implementation of features offered by the kernel file system driver. For comparison, we have implemented makefs using kernel file system drivers for phase 4. It is currently available as an unofficial alternative to the original makefs. The exception is ISO support, for which we use the original makefs utility; the kernel CD file system driver is read-only.
Adding support for each file system required only a small amount of modification. We compare the two implementations in Table 2. As can be observed, over a third of the original effort went into implementing support for a single file system driver. Since we reuse the kernel driver, we get this functionality for free.
All of the FFS code in the rump implementation is involved in calculating the image size and was already available from makefs. If this code had not been available, we most likely would have implemented the size calculation using shell utilities. However, since determining the size involves multiple calculations, such as dealing with hard links and rounding up directory entry sizes, we concluded that reusing working code was the better option.
As rump implements environment-dependent code in parallel with the kernel, the implementation needs to keep up with kernel changes. There are two kinds of breakage: the kind resulting in compile failure and the kind resulting in non-functional compiled code. The numbers in Table 3 have been collected from version control logs for the period from August to December during which rump has been part of the official NetBSD source tree.
The commits figure represents the number of changes on the main trunk. The number of build fixes is calculated from the commits that were made after the kernel was changed and rump no longer built as a result. For example, a file system being changed to require a kernel interface not yet supported by rump is this kind of failure.
Commits in which rump was patched along with the kernel proper were not counted in this figure. Similarly, functionality fixes include fixes for changes to kernel interfaces which prevented rump from working; in other words, the build worked but running the code failed.
Regular bugs are not included in this figure. Unique committers represents the number of people from the NetBSD community who committed changes to the rump tree.
The most common case was keeping up with changes in other parts of the kernel. Based on our observations, the most important factor in keeping rump functional in a changing kernel is educating developers about its existence and how to test it. Initially there was a lot of confusion in the community about how to test rump, but the situation has since improved. It should be kept in mind that over the same time frame the NetBSD kernel underwent very heavy restructuring to better support multicore.
As it was the heaviest set of changes in the past 15 years, the data should be considered a "worst case" rather than a "typical case".

Figure 5: Lines of Code History

To give an idea of how much code there is to maintain, Figure 5 displays the number of lines of code for rump in the NetBSD source tree. The count excludes empty lines and comments.
Features have been added, but much of this has been done with environment independent code. Not only does this reduce code duplication, but it makes rump file systems behave closer to kernel file systems on a detailed level. There have been two steep increases in code size. The first one was in January , when all of the custom file system code written for userspace, such as namei, was replaced with kernel code. While functionality provided by the interfaces remained equivalent, the special case implementation for userspace was much smaller than the more general kernel code.
The general code also required more complex emulation.

For the measurements, standalone rump file systems were used with fs-utils. Mounted rump file systems were not measured, as they mostly test the performance of puffs and its kernel cache. For the copy operations the source data was precached. The figures are the duration from mount to operation to unmount. Both file systems were aged [26]: the first one artificially, by copying and deleting files.
The latter is in daily use on the author's laptop and has aged through natural use. FFS integrity is maintained by performing key metadata operations synchronously. The results are presented in Figure 6 and Figure 7. The figures between the graphs are not directly comparable, as the file systems have a different layout and different aging. The CD image used for the large copy and the kernel source tree used for the tree copy are the same.
The file systems have different contents, so the listing figures are not comparable at all.

Figure 6: FFS on a regular file (buffered)

The results are in line with our expectations. This difference is explained by the fact that the buffered file includes read-ahead for a userspace consumer, while the kernel mount accesses the disk unbuffered. Copying the large file is slower. The memory-mapped case does not suffer as badly as the large copy, as locality is better.
The tradeoff is increased memory use. In the unbuffered case, the problem of not being able to execute a synchronous write operation while an asynchronous one is in progress becomes visible.

In the case of a graph parameter which is not an input to the graph, this function provides an 'empty' reference into which a graph execution can write new data. This function essentially transfers ownership of the reference from the application to the graph. If a reference outside this list is provided, the behaviour is undefined.
This function dequeues references from a graph parameter of a graph. The reference that is dequeued is one that had previously been enqueued into the graph and that, after a subsequent graph execution, is considered processed or consumed by the graph. This function essentially transfers ownership of the reference from the graph to the application.
In the case of a graph parameter which is an input to the graph, this function provides a 'consumed' buffer to the application so that new input data can be filled in and later enqueued to the graph. In the case of a graph parameter which is not an input to the graph, this function provides a reference filled with new data based on graph execution. The application can then use this newly generated data, as in the sketch below.
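The following is a minimal sketch of this enqueue/dequeue pattern. It assumes the functions described here are vxGraphParameterEnqueueReadyRef and vxGraphParameterDequeueDoneRef from the OpenVX pipelining extension, that the graph has already been verified and configured (e.g. via vxSetGraphScheduleConfig) so that graph parameter 0 is the input queue and parameter 1 is the output queue, and that enqueueing automatically triggers graph execution; the header name and error handling are simplified.

```c
#include <VX/vx.h>
#include <VX/vx_khr_pipelining.h>

/* Pipelined graph execution via graph parameter queues. */
void run_pipelined(vx_graph graph, vx_image in[], vx_image out[],
                   vx_uint32 depth, vx_uint32 num_frames)
{
	vx_uint32 i, num_refs;
	vx_image done_in, done_out;

	/* Prime the pipeline: hand both queues their initial references.
	 * Enqueuing transfers ownership of each reference to the graph. */
	for (i = 0; i < depth; i++) {
		vxGraphParameterEnqueueReadyRef(graph, 0, (vx_reference *)&in[i], 1);
		vxGraphParameterEnqueueReadyRef(graph, 1, (vx_reference *)&out[i], 1);
	}

	for (i = depth; i < num_frames; i++) {
		/* Dequeue a consumed input and a produced output; ownership
		 * returns to the application. */
		vxGraphParameterDequeueDoneRef(graph, 0, (vx_reference *)&done_in, 1, &num_refs);
		vxGraphParameterDequeueDoneRef(graph, 1, (vx_reference *)&done_out, 1, &num_refs);

		/* ... refill done_in with new input data, consume done_out ... */

		/* Re-enqueue the now-empty references to keep the pipeline full. */
		vxGraphParameterEnqueueReadyRef(graph, 0, (vx_reference *)&done_in, 1);
		vxGraphParameterEnqueueReadyRef(graph, 1, (vx_reference *)&done_out, 1);
	}
}
```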
Typically, once this new data has been consumed by the application, the now 'empty' reference is enqueued to the graph again. A related function checks the number of references that can be dequeued and returns that value to the application.

After vxDisableEvents is called, event generation is disabled and vxWaitEvent will no longer receive new events; the basic event-handling pattern is sketched below.
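This is a minimal sketch of the event mechanism, assuming the event API of the OpenVX pipelining extension (vxEnableEvents, vxRegisterEvent, vxWaitEvent, vxDisableEvents); the exact vxRegisterEvent signature, enum values, and vx_event_t field names vary between extension versions and should be checked against the target implementation.

```c
#include <VX/vx.h>
#include <VX/vx_khr_pipelining.h>

/* Register interest in a node's completion, then wait on the context's
 * event queue until that node has finished. */
void wait_for_node_completion(vx_context context, vx_graph graph, vx_node node)
{
	vx_event_t event;

	vxEnableEvents(context);	/* turn on event generation */

	/* Ask for an event each time this node finishes; the last argument
	 * is an extension-defined parameter (assumed unused here). */
	vxRegisterEvent((vx_reference)node, VX_EVENT_NODE_COMPLETED, 0);

	vxScheduleGraph(graph);

	/* Block until the next event arrives (vx_false_e = blocking wait). */
	if (vxWaitEvent(context, &event, vx_false_e) == VX_SUCCESS &&
	    event.type == VX_EVENT_NODE_COMPLETED) {
		/* the node has completed; react here */
	}

	vxWaitGraph(graph);
	vxDisableEvents(context);	/* stop generating new events */
}
```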
Generation of events may require additional resources and overhead in an implementation. Hence, events should be registered for references only when really required by an application. This API can be called on a graph, a node, or a graph parameter.

This API enables streaming mode of graph execution on the given graph.
The node given to the API is set as the trigger node. A trigger node is defined as the node whose completion causes a new execution of the graph to be triggered.
In streaming mode of graph execution, once an application starts graph execution, no further intervention by the application is needed to re-schedule the graph; i.e. the graph is re-scheduled continuously until vxStopGraphStreaming is called by the user or any of the graph nodes returns an error during execution. After streaming mode has been started, vxScheduleGraph should not be used on that graph by an application. vxStopGraphStreaming blocks until graph execution is gracefully stopped at a logical boundary, for example when all internally scheduled graph executions have completed.
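Below is a minimal sketch of streaming execution. It assumes the calls involved are vxEnableGraphStreaming (which takes the trigger node), vxStartGraphStreaming, and vxStopGraphStreaming, as in the OpenVX streaming extension, and that streaming must be enabled before graph verification; both assumptions should be checked against the target implementation.

```c
#include <VX/vx.h>
#include <VX/vx_khr_pipelining.h>

/* The graph re-schedules itself after every completion of the trigger
 * node until explicitly stopped. */
void stream_graph(vx_graph graph, vx_node source_node)
{
	/* Mark 'source_node' as the trigger node; each time it completes,
	 * a new execution of the graph is triggered.  Assumed to be done
	 * before vxVerifyGraph() in this sketch. */
	vxEnableGraphStreaming(graph, source_node);

	if (vxVerifyGraph(graph) != VX_SUCCESS)
		return;

	vxStartGraphStreaming(graph);	/* runs continuously from here on */

	/* ... application does other work; no vxScheduleGraph() calls ... */

	/* Blocks until execution stops at a logical boundary, e.g. when all
	 * internally scheduled executions have completed. */
	vxStopGraphStreaming(graph);
}
```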
The extension also defines supporting data structures, macros, and enumerations, including the type of graph scheduling mode, the graph attributes added by this extension, and the type of event that can be generated during system execution.
Note: graph execution could still be "in progress" for the rest of the graph that does not use this data reference. A node-completed event is generated every time a node within a graph completes execution.
A node-error event is generated every time a node within a graph returns an error. Note: since user events are initiated by the application and not by the framework, the application does NOT register user events using vxRegisterEvent.

Be aware that transparency (Alpha Blend mode) is complex for real-time engines to render and may behave in unexpected ways after export.
Where possible, use Alpha Clip mode instead, or place Opaque polygons behind only a single layer of Alpha Blend polygons. There is a mapping type selector across the top. Point is the recommended type for export. Texture and Vector are also supported.
The supported offsets include a deliberate choice of UV mapping. Any Image Texture nodes may optionally be multiplied with a constant color or scalar.
These will be written as factors in the glTF file: numbers that are multiplied with the specified image textures. These are not common. A single material may use all of the above at the same time, if desired. This figure shows a typical node structure when several of the above options are applied at once.
The core glTF 2.0 format can be extended with extra information using glTF extensions. This allows the file format to hold details that were not considered universal at the time of first publication. Not all glTF readers support all extensions, but some are fairly common. Certain Blender features can only be exported to glTF via these extensions.
The following glTF 2.0 extensions are supported. It is possible for Python developers to add Blender support for additional glTF extensions by writing their own third-party add-on, without modifying this glTF add-on. For more information, see the example on GitHub and, if needed, register an extension prefix. Custom properties are stored in the extras field on the corresponding object in the glTF file. Unlike glTF extensions, custom properties (extras) have no defined namespace, and may be used for any user-specific or application-specific purposes.
A glTF animation changes the transforms of objects or pose bones, or the values of shape keys. One animation can affect multiple objects, and there can be multiple animations in a glTF file. Imported models are set up so that the first animation in the file is playing automatically. Scrub the Timeline to see it play. When the file contains multiple animations, the rest will be organized using the Nonlinear Animation editor.
Each animation becomes an action stashed to an NLA track. The track name is the name of the glTF animation. To make the animation within that track visible, click Solo (the star icon) next to the track you want to play. If an animation affects multiple objects, it will be broken up into multiple parts.
The part of the animation that affects one object becomes an action stashed on that object. Use the track names to tell which actions are part of the same animation. To play the whole animation, you need to enable Solo (the star icon) for all of its tracks.
You can export animations by creating actions. An action will be exported if it is the active action on an object, or if it is stashed to an NLA track (e.g. using the Stash operation in the Action Editor). Actions which are not associated with an object in one of these ways are not exported. If you have multiple actions you want to export, make sure they are stashed! A glTF animation can have a name, which is the action name by default.
If you rename two tracks on two different objects to the same name, they will become part of the same glTF animation and will play together. In this mode, the NLA organization is not used, and only one animation is exported using the active actions on all objects. The glTF specification identifies different ways the data can be stored. The importer handles all of these ways.
The exporter will ask the user to select one of the following forms: