https://www.endpointdev.com/blog/tags/openafs/2009-06-23T00:00:00+00:00End Point DevGetting Started with Demand Attachhttps://www.endpointdev.com/blog/2009/06/getting-started-with-demand-attach/2009-06-23T00:00:00+00:00Steven Jenkins
<p>As OpenAFS moves towards a 1.6 release that has Demand Attach
Fileservers (DAFS), there is a need to thoroughly test Demand Attach.
Getting started can be tricky, so this article highlights the important
steps to configuring a Demand Attach fileserver.</p>
<p>OpenAFS CVS HEAD does not come with Demand Attach enabled by default,
so you’ll need to build your own binaries. You should consult the
official documentation, but the major requirement is to pass the
--enable-demand-attach-fs option to configure.
You should also note that DAFS is only supported on namei fileservers,
not inode.</p>
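<p>As a rough sketch of such a build (the prefix, the namei flag, and the exact flag set are illustrative; consult the official build documentation for your platform):</p>

```shell
# Illustrative build sketch only -- flags beyond --enable-demand-attach-fs
# are assumptions; check the documentation for your platform.
./configure --enable-demand-attach-fs \
            --enable-namei-fileserver \
            --prefix=/usr/local
make
make install
```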
<p>Once you’ve built and installed the binaries, you need to be careful
to remove your existing fileserver’s bos configuration (i.e., fs)
and put a dafs one in place; e.g.,</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ bos stop localhost fs -localauth
$ bos delete localhost fs -localauth
</code></pre></div><p>Once the fs bnode is deleted, you need to install the new
binaries and create the dafs entry. You should pass your
normal command line arguments to the fileserver and volserver processes:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-plain" data-lang="plain">$ bos create localhost dafs dafs "/usr/afs/bin/fileserver -my-usual-options" \
/usr/afs/bin/volserver \
/usr/afs/bin/salvageserver /usr/afs/bin/salvager
</code></pre></div><p>Once the entry is created, the bosserver will automatically bring up the processes, so you should check the logfiles to make sure everything is ok. Note that a vos listvol will show volumes as online, even if they are only pre-attached (<em>pre-attached</em> means that the fileserver was able to read the volume header, but has not yet brought the volume fully
online). You can watch the FileLog to see when the fileserver requests a salvage be done.</p>
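<p>One way to check on the new processes and watch for demand salvages (log paths follow the conventional /usr/afs layout; server and partition names are placeholders):</p>

```shell
# Verify the dafs bnode and its four processes are running
bos status localhost dafs -long -localauth

# Volumes show as On-line even when only pre-attached
vos listvol localhost a -localauth

# Watch for the fileserver requesting salvages
tail -f /usr/afs/logs/FileLog /usr/afs/logs/SalvageLog
```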
<p>After the initial configure, build, and bos configuration, your Demand Attach fileserver is not significantly different from your normal fileserver. You create, move, back up, and restore volumes just as with a traditional fileserver.</p>
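<p>For example, the everyday volume operations look the same under DAFS as before (server, partition, and volume names below are placeholders):</p>

```shell
# Day-to-day volume operations are unchanged under a DAFS fileserver
vos create fs1.example.com a home.alice -localauth
vos backup home.alice -localauth
vos move home.alice fs1.example.com a fs2.example.com b -localauth
vos restore fs1.example.com a home.alice.restored \
    -file /tmp/home.alice.dump -localauth
```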
<p>For more details about DAFS, take a look at the <a href="https://web.archive.org/web/20090726055424/http://www.dementia.org/twiki/bin/view/AFSLore/DemandAttach">OpenAFS wiki entry</a>. Be sure to give feedback to the <a href="mailto:openafs-info@openafs.org">mailing list</a>.</p>
Why OpenAFS?https://www.endpointdev.com/blog/2009/04/why-openafs/2009-04-24T00:00:00+00:00Steven Jenkins
<p>Once you’ve understood <a href="/blog/2009/01/what-is-openafs/">what OpenAFS is</a>, you might ask “Why use OpenAFS?” There are several very good reasons to consider OpenAFS.</p>
<p>First, if you need a cross-platform network filesystem, OpenAFS is a
solid choice. While CIFS is the natural choice on Windows, and NFS is
a natural choice on Unix, OpenAFS gives a heterogeneous choice (and it
works on Mac OS X, too).</p>
<p>Setting aside which filesystem is natural for a given platform, though,
OpenAFS has a strong advantage with respect to remote access. While it’s
common to access systems remotely via a Virtual Private Network (VPN),
Secure Shell (SSH), or Remote Desktop, OpenAFS allows the actual files
themselves to be shared across a WAN, a dialup link, or a mobile device
(and since OpenAFS is cross-platform, the issue of which sets
of remote access software to support is lessened). Having files appear
to be local to the device reduces the need for remote access systems and
simplifies access. The big win, though, is that OpenAFS’ file caching
helps performance and lessens bandwidth requirements.</p>
<p>Another reason to use OpenAFS is if you need your network filesystem to
be secure. While both CIFS and NFS have secure versions, in practice,
they are often configured to be backwards compatible to a least common
denominator and are relatively insecure. Typically, either they trust the
client to be secure (NFS), or the backwards compatibility significantly
lessens security (CIFS). While their security mechanisms may be
acceptable for an isolated or trusted network, OpenAFS can be relied on over an
untrusted network. Common practice for allowing CIFS and/or NFS access
over an untrusted network is to leverage a VPN, which introduces yet
another piece of software to manage. OpenAFS, on the other hand, ‘just
works’ over an untrusted network and makes no assumptions about the
trustworthiness of the client.</p>
<p>Business growth often drives opening new offices. Sharing data across
those offices can be a challenge, and OpenAFS, because it was designed
to be a wide area filesystem, not just a local area filesystem, shines.
By creating a global namespace and linking the offices together, all
data in all offices can be accessed seamlessly. This can be as simple
as two offices, one central with OpenAFS servers and the other remote,
with only OpenAFS clients, or it can scale up a step to where each remote
office holds file and meta-data servers so that commonly shared local
files can be accessed more quickly. It can even scale up globally with a
more complex environment. Morgan Stanley’s environment as of Spring 2008
had around 500 servers globally, providing OpenAFS file services to tens
of thousands of Unix and Windows clients in approximately 100 offices.
No other network filesystem offers such amazing scalability.</p>
<p>Business challenges often mean closing offices, and OpenAFS’
flexibility works well here, too. Since data can be moved while
on-line, servers in an office can be migrated to a different location,
and OpenAFS clients will automatically get data from the new location,
making removal of the infrastructure in an office straightforward.</p>
<p>OpenAFS’s ability to scale down to a single office and up to a complex
global environment sets it apart from all other network filesystems.
If you need a network filesystem, why not choose OpenAFS? It will let
you grow without having to go through a filesystem switch when you find
that your current choice limits your ability to accomplish your goals.</p>
Why not OpenAFS?https://www.endpointdev.com/blog/2009/01/why-not-openafs/2009-01-28T00:00:00+00:00Steven Jenkins
<p><a href="https://www.openafs.org/">OpenAFS</a> is not always the right answer for a filesystem. While it is a good network filesystem, there are usage patterns that don’t fit well with OpenAFS, and there are some issues with OpenAFS that should be considered before adopting or using it.</p>
<p>First, if you don’t really need a network filesystem, the overhead of OpenAFS may not be worthwhile. If you mostly write data but seldom read it across a network, then the cache of OpenAFS may hinder performance rather than help. OpenAFS might not be a good place to put web server logs, for example, which are written to very frequently but seldom read.</p>
<p>OpenAFS is neither a parallel filesystem nor a high-performance filesystem. In high-performance computing (HPC) situations, a single system (or small set of systems) may write a large amount of data, and then a large number of systems may read from that. In general, OpenAFS does not scale well for multiple parallel reads of read-write data, but it scales very well for parallel reads of replicated read-only data. Because read-only replication is not instantaneous, depending on the latencies that can be tolerated, OpenAFS may or may not be a good choice. If you need to write and immediately read gigabytes or terabytes of data, OpenAFS may not work well for you.</p>
<p>It should be noted, though, that Hartmut Reuter and others have developed
<a href="http://workshop.openafs.org/afsbpw08/talks/thu_3/OpenAFS+ObjectStorage.pdf">
extensions to OpenAFS</a> that allow for parallel access to read-write data, and their testing has shown that accesses scale linearly with the degree of parallelism. Work to integrate their extensions into core OpenAFS is ongoing.</p>
<p>Additionally, if your environment needs to leverage special-purpose high-speed networks and does not leverage IP for connectivity, then OpenAFS will not be a good choice. It only communicates over IP and does not do Infiniband or Myrinet, for example.</p>
<p>OpenAFS is also more difficult than NFS or CIFS to set up and administer. For those two products, simple configurations can be set up in minutes, often just requiring editing a few files and/or clicking on a simple GUI to ‘share’ some files.</p>
<p>OpenAFS, on the other hand, requires configuration on the client, and setup of both fileservers and the other infrastructure servers (e.g., Kerberos, the user and group management server, and the location server). Thus, OpenAFS has a higher hurdle for getting started.</p>
<p>As mentioned, OpenAFS requires <a href="http://www.kerberos.org/">Kerberos</a>. For an environment that already has Kerberos infrastructure, whether via Active Directory, MIT Kerberos, Heimdal, or another implementation, this might not be a large challenge. For an environment that does not leverage Kerberos, though, determining the right Kerberos infrastructure, the policies to manage it, and getting the implementation done can be a significant hurdle.</p>
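<p>For an MIT Kerberos site, the bootstrap looks roughly like the following sketch. The realm, cell name, keytab path, and key version number (kvno) are all placeholders, and the enctype handling reflects the single-DES requirement of AFS at the time; check the documentation for your Kerberos implementation.</p>

```shell
# Hypothetical sketch: create the AFS service principal, extract its
# key, and load it into the server KeyFile. Names and kvno are
# placeholders.
kadmin.local -q "addprinc -randkey afs/example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k /tmp/afs.keytab -e des-cbc-crc:normal \
    afs/example.com@EXAMPLE.COM"

# Load the key into the AFS KeyFile on each server
asetkey add 3 /tmp/afs.keytab afs/example.com@EXAMPLE.COM
```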
<p>Also, as OpenAFS has its own user and group management components, the interaction of those with existing components (or lack thereof) also needs to be resolved. An organization that uses LDAP (or Active Directory), for example, might need to leverage some add-ons to more smoothly integrate with OpenAFS, or new code might need to be written to make that integration work better.</p>
<p>While both Kerberos and integration of user and group management are good system administration practices, for an organization that does not already have these practices, needing to adopt them in order to reasonably evaluate and use OpenAFS can be daunting.</p>
<p>The filesystem semantics of OpenAFS can also be a barrier to adoption. OpenAFS only uses the owner bits for Unix file permissions, for example, so the group and other bits are completely unused (OpenAFS preserves them, but just doesn’t consult them for access control). This can cause issues with software that relies on group permissions to manage access. OpenAFS uses access control lists (ACLs) to do this, which are similar to those used on Windows but do not implement the traditional Unix semantics.</p>
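<p>To illustrate the difference, OpenAFS access control is managed per directory with the fs command rather than with Unix group bits (cell, directory, and user names below are placeholders):</p>

```shell
# Show the ACL on a directory
fs listacl /afs/example.com/home/alice

# Grant a user read/write-style rights on a shared directory
fs setacl -dir /afs/example.com/home/alice/shared -acl bob rlidwk

# Grant everyone read access to a public directory
fs setacl -dir /afs/example.com/public -acl system:anyuser rl
```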
<p>Another semantic difference is that OpenAFS does not implement byte-range locking but only implements file-level locking. Some software (e.g., Microsoft Access) requires byte-range locking in order to work properly; thus, OpenAFS is not a good place to store Microsoft Access databases.</p>
<p>Network filesystems often have semantic differences from local filesystems, and OpenAFS is no different. For OpenAFS, the big difference for developers is that it does not implement write-on-commit semantics but rather write-on-close. In other words, when a client issues a write request, that request does not necessarily cause other clients reading the data to see the new contents. Instead, OpenAFS writes the data back when the file is closed (or on an fsync() call). While this is not specific to OpenAFS, it is a subtlety of networked filesystems that many developers may not be aware of, so they need to be more careful about checking the return status of file close() calls, and they also need to be aware of the differences so that they can properly handle any cross-system coordination based on the contents of files.</p>
<p>While OpenAFS is a solid network filesystem, there are scenarios in which OpenAFS might be too heavyweight, might not perform as well as needed, or behave differently from what is required. Understanding these issues is helpful in making a reasoned choice about a network filesystem.</p>
What is OpenAFS?https://www.endpointdev.com/blog/2009/01/what-is-openafs/2009-01-08T00:00:00+00:00Steven Jenkins
<p>A common question about OpenAFS adoption is “What is OpenAFS?” Usually,
the person asking the question is somewhat familiar with filesystems, but
doesn’t follow the technical details of various filesystems. This article
is designed to help that reader understand why OpenAFS could be a useful
solution (and understand where it is not a useful solution).</p>
<p>First, the basics. OpenAFS is an open source implementation of AFS:
from the OpenAFS <a href="https://www.openafs.org/">website</a>, OpenAFS
is a heterogeneous system that “offers client-server architecture for
federated file sharing and replicated read-only content distribution,
providing location independence, scalability, security, and transparent
migration capabilities”.</p>
<p>Let’s break that down:</p>
<p>First, OpenAFS is extremely cross-platform. OpenAFS clients exist for
small devices (e.g., the Nokia tablet) up to mainframes. Do you want
Windows with that? <a href="https://www.openafs.org/windows.html">Not a
problem</a>. On the other hand, OpenAFS servers are primarily available
on Unix-based platforms. Implementations of OpenAFS servers for Windows
do exist, but they are not recommended or supported. (If you’d like to
change that, you are welcome to submit patches or to hire developers to
make that change; that’s a major advantage of an open source project.)</p>
<p>The second part of OpenAFS is rather straightforward: it is a
client-server distributed file system. Much like SMB/CIFS in the Windows
world, and NFS in the Unix world, OpenAFS lets file accesses take place
over a network. One feature that sets OpenAFS apart from CIFS and NFS,
though, is its strong file consistency semantics based on its use of
client-side caching and callbacks. Client-side caching lets clients
access data from their local cache without going across the network for
every access.</p>
<p>Other distributed filesystems allow this as well, but OpenAFS is rather
unusual in that it guarantees that the clients will be notified if
the file changes. This caching plus the consistency guarantees make
OpenAFS especially useful across wide-area networks, not just local area
networks. With respect to consistency, most other distributed filesystems
use timeouts and/or some kind of FIFO or LRU algorithm for determining how
a client handles content in a cache. OpenAFS uses callbacks, which are
a promise from the file server to the client that if the file changes,
the server will contact the client to tell the client to invalidate the
cached contents. That notion of callbacks gives OpenAFS a much stronger
consistency guarantee than most other distributed filesystems.</p>
<p>Another unusual feature in OpenAFS is that it provides a mechanism
for replicated access to read-only data, without requiring any special
hardware or additional high-availability or replication technology. In a
sense, OpenAFS can be considered an inexpensive way to get a read-only
SAN. OpenAFS does this by classifying data as read-write or read-only,
and providing a mechanism to create replicas of read-only data. Up to
11 replicas of data can be made, allowing read access to be very widely
distributed.</p>
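<p>Creating those replicas is a matter of defining read-only sites and releasing the read-write volume to them (server, partition, and volume names are placeholders):</p>

```shell
# Define two read-only sites for a volume, then push the
# read-write contents out to them
vos addsite fs1.example.com a software
vos addsite fs2.example.com a software
vos release software
```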
<p>The last four features mentioned in the website description are also
very interesting: location independence, scalability, security, and
transparent migration.</p>
<p>OpenAFS provides location independence by separating information about
where a file resides from the actual filesystem itself. This allows
separation of name service from file service, which lets OpenAFS scale
better. It also provides some functionality not present in other networked
filesystems in that changing the location of the data can be more easily
done. Because of the layer of indirection, OpenAFS is able to make a
copy of data behind the scenes, and after that data has been migrated,
to then update the location information. This allows for transparent
migration of data.</p>
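<p>In practice, that migration is a single administrative command; clients find the new location automatically through the volume location database (names below are placeholders):</p>

```shell
# Move a volume between servers while it stays online
vos move home.alice fs1.example.com a fs2.example.com b

# Confirm the new server and partition
vos examine home.alice
```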
<p>Because the location of data is separate from the data itself, if some of
the data is found to be more heavily used, that data can be migrated to a
separate server, so as to better balance out the accesses across multiple
servers. This can be done without negatively impacting the users. This
kind of feature is not usually found in networked filesystems but only
in either higher-end proprietary Network Attached Storage (NAS) systems,
or in Storage Area Networks (SANs).</p>
<p>Because of OpenAFS’s use of client-side caching, read-only data, and
separation of location information from the filesystem itself, OpenAFS
can scale up quite well. The initial design of AFS was to be at least
10 times more scalable than the implementations of NFS at that time,
with a client to server ratio of 200:1. While client to server ratios
are highly dependent on hardware and filesystem access patterns, 200:1
is still easily achievable, and much higher ratios have been leveraged
in production environments. 600:1 is achievable in an environment where
the data is predominately read-only.</p>
<p>OpenAFS provides built-in security by leveraging Kerberos to provide
authentication services. The servers themselves rely on Kerberos to ensure
that a rogue host cannot successfully masquerade as an OpenAFS server,
even if DNS is compromised. OpenAFS itself is agnostic with respect to
what kind of Kerberos server is used, as long as it supports the Kerberos
5 protocol standards: a Windows Kerberos Domain Controller can provide
the Kerberos services for an OpenAFS installation, as can an MIT KDC or
a Heimdal one.</p>
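<p>From a user’s point of view, authentication is a short sequence of commands (realm and cell names are placeholders):</p>

```shell
# Obtain a Kerberos 5 ticket, convert it into an AFS token,
# and list the tokens held
kinit alice@EXAMPLE.COM
aklog -c example.com -k EXAMPLE.COM
tokens
```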
<p>Additionally, traffic between the clients and servers can be encrypted
by OpenAFS itself (i.e., not just with SSH or VPN encryption). This can
provide an extra layer of security.</p>
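<p>This encryption is toggled per client with the fs command:</p>

```shell
# Enable encryption of AFS traffic on this client, then verify
fs setcrypt -crypt on
fs getcrypt
```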
<p>Overall, OpenAFS provides some of the features of traditional network
filesystems like CIFS and NFS, but with better scalability, consistency
and security. Additionally, because of its ability to replicate and
transparently migrate data, OpenAFS can be leveraged much like a SAN,
but without the proprietary tie-ins to hardware.</p>
Google Sponsored AFS Hack-A-Thonhttps://www.endpointdev.com/blog/2008/11/google-sponsored-afs-hack-thon/2008-11-13T00:00:00+00:00Max Cohan
<p>Day One:</p>
<p>Woke up an hour early, due to having had a bit of confusion as to the start time (the initial email was a bit optimistic as to what time AFS developers wanted to wake up for the conference).</p>
<p>Met up with Mike Meffie (an AFS Developer of <a href="https://www.sinenomine.net/">Sine Nomine</a>) and got a shuttle from the hotel to the ‘Visitors Lobby’; only to find out that <strong>each building</strong> has a visitors lobby. One neat thing, Google provides free bikes (beach cruisers) to anyone who needs them. According to the receptionist, any bike that isn’t locked down is considered public property at Google. However, it’s hard to pedal a bike and hold a briefcase; so off we went hiking several blocks to the correct building. Mike was smart enough to use a backpack, but hiked with me regardless.</p>
<p>The food was quite good, a reasonably healthy breakfast including fresh fruit (very ripe kiwi, and a good assortment). The coffee was decent as well! After much discussion, it was decided that Mike & I would work towards migrating the community CVS repository over to Git. Because Git sees the world as ‘patch sets’ instead of just individual file changes, migrating it from a view of the ‘Deltas’ makes the most sense. The new Git repo. (when complete) should match 1:1 to the Delta history. There was a good amount of teasing as to whether Mike and I could make any measurable progress in 2 days. Derrick was able to provide pre-processed delta patches and the bare CVS repo. (though we spent a good amount of the day just transferring things around and determining what machine should be used for development).</p>
<p>Lunch (rather tasty sandwiches) and after-lunch snacks were provided; Google definitely doesn’t skimp on the catering. We made good progress for one day of combined work: we now have a clear strategy for processing the deltas and initial code that is showing strong promise. Much teasing ensued that Mike & I should not be allowed to eat if we did not have the Git repo. ready for use. Dinner was a big group affair of food, beer, and Kerberos.</p>
<p>Day Two:</p>
<p>After arriving with Mike Meffie via the shuttle, we found out that Tom Keiser (also of Sine Nomine) had been left behind! The shuttle driver was kind enough to go pick up Tom (who ended up at a related, but <strong>different</strong> hotel than the conference recommended) and bring him for questioning (or development, as the case may be). Determined that the major issue in applying the deltas was simply due to inconsistencies in what the ‘base’ import should consist of… After several rounds of cleanup, all but a few of the deltas (and those were fixed by hand) applied cleanly!</p>
<p>On the food side, Google outdid itself with these cornbread ‘pizzas’ that were extremely good. Once we started having a few branches to play with, things came together quickly… generating much buzz and excitement (at least, for us). We all split off for dinner, with a few of us escorting Tom to his train then getting some Indian food (on a rather busy day, as it was the ‘Festival of Lights’).</p>
<p>In Conclusion:</p>
<p>We were able to get a clean specification <strong>with consensus</strong> for how we want to produce the public Git repository. The specifications are even available on the <a href="https://www.dementia.org/twiki/bin/view/AFSLore/OpenAFSCVSToGitConversion">OpenAFS wiki</a>. The tools (found at ‘/afs/sinenomine.net/public/openafs/projects/git_work/’) to produce this repo. are all in a rough working form, with only the ‘merge’ tool still needing some development effort. All of these efforts were definitely facilitated by Google providing a comfortable work environment, a solid internet connection and good food to keep us fueled through it all.</p>
<p>Things to do now:</p>
<ul>
<li>Clean up and document the existing tools</li>
<li>Improve the merge process to simplify folding the branches</li>
<li>Actually produce the Git repository</li>
<li>Validate the consistency of the Git repository against the CVS repository</li>
<li>Determine how tags are to be ported over and apply them</li>
<li>Publish repo. publicly</li>
</ul>
Ohio Linux Fest AFS Hackathonhttps://www.endpointdev.com/blog/2008/10/ohio-linux-fest-afs-hackathon/2008-10-24T00:00:00+00:00Steven Jenkins
<p>The one-day <a href="https://ohiolinux.org/">Ohio Linux Fest</a> AFS Hackathon flew by in a hurry. Those new to OpenAFS got started converting some commented source code into Doxygen-usable format to both improve the code documentation as well as get a feel for some of the subsystems in OpenAFS. Several of the developers took advantage of the time to analyze some outstanding issues in Demand Attach (DAFS). We also worked on some vldb issues and had several good conversations about AFS roadmaps, Rx OSD, the migration from CVS to Git, and the upcoming Google-sponsored AFS hackathon.</p>
<p>The Doxygen work gave those new to the OpenAFS code a chance to look under the covers of OpenAFS. <a href="http://www.stack.nl/~dimitri/doxygen/">Doxygen</a> produces pretty nice output from simple formatting commands, so it’s really just a matter of making comments follow some basic rules. Sample Doxygen output (from some previous work) can be seen <a href="https://web.archive.org/web/20081028152756/http://charles.endpoint.com/doxygen/html/ubik_8c.html">here</a>, and some of the new Doxygen changes have already been made to OpenAFS.</p>
<p>The Demand Attach work focused on the interprocess communications pieces, namely the FSSYNC & SALVSYNC channels, specifying requirements and outlining the approaches for implementing bi-directional communications so that the failure of one process would not leave a volume in an indeterminate state. Some coding was done to address some specific locking issues, but the design and implementation of better interprocess volume state management is still an open issue.</p>
<p>The OpenAFS Roadmap discussion revolved around 3 major pieces: CVS to Git conversion, Demand Attach, and Rx OSD. DAFS is in the 1.5.x branch currently, but Rx OSD is not. The general consensus was that DAFS plus some of Rx OSD might be able to go into a stable 1.6 release in Q1 of 2009, which would also let the Windows and Unix stable branches merge back together.</p>
<p>However, the major goal in the short term is to get the CVS to Git migration done to make development more streamlined. Derrick Brashear, Mike Meffie, and Fabrizio Manfredi are all working on this.</p>
<p>The 1.6 merge, DAFS, and Rx OSD are all still very much works in progress in terms of getting them into a stable release together. While individually, DAFS and Rx OSD have been used by some OpenAFS installations in production, there is a lot more work to be done in terms of getting them integrated into a stable OpenAFS release.</p>
<p>Overall, the hackathon went very well, with some new AFS developers trained, and some progress made on existing projects. Many thanks to the Ohio Linux Fest for their support, and to Mike Meffie specifically for his efforts in coordinating the hackathon.</p>
Know your tools under the hoodhttps://www.endpointdev.com/blog/2008/09/know-your-tools-under-hood/2008-09-11T00:00:00+00:00David Christensen
<p>Git supports many workflows; one common model that we use here at End Point is having a shared central bare repository that all developers clone from. When changes are made, the developer pushes the commit to the central repository, and other developers see the relevant changes on subsequent pulls.</p>
<p>We ran into an issue today where after a commit/push cycle, suddenly pulls from the shared repository were broken for downstream developers. It turns out that one of the commits had been created by root and pushed to the shared repository. This worked fine to push, as root had read-write privileges to the filesystem, however it meant that the loose objects which the commit created were in turn owned by root as well; fs permissions on the loose objects and the updated refs/heads/branch prevented the read of the appropriate files, and hence broke the pull behavior downstream.</p>
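<p>A diagnostic sketch for this failure mode follows. The scratch bare repository stands in for the shared one; on a healthy repository the find prints nothing, while on the broken one it listed the root-owned files. The gituser/gitgroup names in the fix are placeholders.</p>

```shell
# Create a scratch bare repo as a stand-in for the shared repository
repo=$(mktemp -d)
git init -q --bare "$repo"

# Diagnose: list any loose objects or refs NOT owned by the expected
# account -- these are what broke downstream pulls
find "$repo/objects" "$repo/refs" -not -user "$(id -un)"

# Fix (run as root on the real shared repository):
#   chown -R gituser:gitgroup /path/to/shared.git/objects /path/to/shared.git/refs
rm -rf "$repo"
```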
<p>Trying to debug this purely on the reported messages from the tool itself would have resulted in more downtime at a critical time in the client’s release cycle.</p>
<p>There are a couple of morals here:</p>
<ul>
<li>Don’t do anything as root that doesn’t need root privileges. :-)</li>
<li>Understanding how git works at a low level enabled a speedy detection of the (<em>ahem</em>) root cause of the problem and led to quick correction of the underlying permissions/ownership issues.</li>
</ul>
OpenAFS Workshop 2008https://www.endpointdev.com/blog/2008/08/openafs-workshop-2008/2008-08-13T00:00:00+00:00Steven Jenkins
<p>This year’s <a href="http://workshop.openafs.org/afsbpw08/">Kerberos and OpenAFS Workshop</a> was very exciting. It was the first I’ve attended since the workshop was large enough to be held separately from USENIX LISA, and it was encouraging to see that this year’s workshop was the largest ever, with well over 100 in attendance, and over 10 countries represented. Jeff Altman of <a href="http://www.secure-endpoints.com">Secure Endpoints</a> did a great job on coordinating the workshop. Kevin Walsh and others at New Jersey Institute of Technology did a fantastic job in hosting, providing the workshop with a good venue and great service.</p>
<p>My summary of the workshop is “energy and enthusiasm” as several projects that have been in the development pipeline are starting to bear fruit.</p>
<p>On the technical side, the workshop keynote kicked off the week with a presentation from Alistair Ferguson from Morgan Stanley, where he noted that the work on demand attach file servers has reduced their server restart times from hours, down to seconds, greatly easing their administrative overhead while making AFS even more highly-available.</p>
<p>Of particular technical note, Jeff Altman reported that the Windows client has had lots of performance and stability changes, with major strategic changes being delivered later this year. Specifically, support for Unicode objects is coming in June, support for disconnected operation is coming in the Fall, and a long-awaited native file system driver will be delivered in December. This work will combine to make the Windows client not just a full-featured AFS client, but also a more solid Windows application.</p>
<p>Hartmut Reuter presented another exciting development work: Object Storage for AFS. This extension to both the AFS client and file server allows for AFS data to be striped across multiple servers (thus allowing for higher network utilization) as well as mirrored (giving higher availability). While this work is not yet in OpenAFS, it is in production at <a href="https://home.cern/">CERN</a> and <a href="https://www.kth.se/">KTH</a>, and work is underway to integrate it into an OpenAFS release.</p>
<p>A major organizational boost was discussed during the workshop: OpenAFS was accepted as a sponsoring organization in the Google Summer of Code and received support for 6 students. Among other projects, these students will be working on support for disconnected operations, enhancements to the Windows client, and improving the kafs implementation of the AFS client sponsored by Red Hat.</p>
<p>The most significant announcement at the workshop is that work is underway to create an organizational entity to support OpenAFS. The OpenAFS Elders have announced the intention to have a 501(c)(3) corporation started in July that will serve as the legal entity behind OpenAFS. From a code standpoint, the licensing of OpenAFS will not change, but from an operational standpoint, people will be able to donate goods, services, and intellectual property to OpenAFS, something that is not currently possible. The foundation will not offer support services as there are currently several companies doing so, but it will be focused on the non-profit components of AFS.</p>
<p>There were several other very interesting talks at the workshop, but the overall message was clear: users and developers are extending OpenAFS and keeping it fresh and viable as the distributed filesystem of choice.</p>