
An NSDL Retrospective: The Case of the Instructional Architect by Mimi Recker

Conclusions

Reflecting over the past several years, we see clear changes in the purpose and vision of the NSDL, from its original conception as a union catalog library to one that is more collaborative, contextual, and participatory (Lagoze, Krafft, Payette, & Jesuroga, 2005). Indeed, we argue that progress toward this “new” NSDL has been hampered by the name ‘library’, as it perhaps conveys more traditional and static notions.

These changes in the NSDL are enabled by its new underlying architecture, Fedora, which allows the expression of a range of relationships between digital objects. From a ‘services’ viewpoint, however, we believe that the NSDL continues to be driven primarily by technical concerns, leaving out the voices of the users. This is especially true of the K-12 world, which has its own complex set of problems. Much research in schools has documented wide disparities in funding, infrastructure, and vision, as well as a wide range of teacher literacy skills with regard to new information technologies. In our work, we have noted that while teachers are generally quick to catch the NSDL’s vision, profound changes in their practices are much more elusive.

As an example, we have considered using the NSDL blogging tool, Expert Voices, with our teachers. Our experience suggests that, at this point in time, the required login and collaboration patterns are too complex for our users.

For the future, we see at least two critical next steps. First, some notion of community sign-on must become widely adopted in order to facilitate seamless interaction. Unfortunately, the current implementation, Shibboleth, has been slow to catch on, perhaps due to its technical complexity. For example, our own implementation has taken over two years, despite high levels of support.

Second, future innovations should move along the lines of something like Facebook. Like this popular social networking tool, the technical infrastructure should provide a set of baseline services and data, then give collections and service providers access to an open architecture so they can build on top of that baseline. These developments could then be shared more easily within the NSDL community. Definitions of such baseline functionality should be developed following a user-centered design approach.

Finally, as NSF funding decreases, the NSDL must continue to grapple with various business models. This is a problem whose complexity cannot be overstated. As has already been done, we must look to the Open Source Software community for interesting examples. Of late, the music industry has also developed successful models. For example, Magnatune.com has developed shareware models for music, which appear to appeal to consumers, musicians, and distributors. It remains to be seen, however, to what extent education can adopt and adapt such for-profit models in support of a public good.

Acknowledgements

This work was partly supported by grants from the National Science Foundation (DUE 0085855, DUE 0333818, DUE 0434892, DRL 0554440). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank the participating teachers and members of the IA research group (Andy Walker, Brooke Robertshaw, Bart Palmer, Kristy Bloxham, Heather Leary).

Comments on this entry are closed.

  • Lois McLean March 8, 2009, 4:19 pm

    Mimi describes the problems of the “wheel reinvention” and “tool silos” that are created when projects with similar functionality and audiences develop in parallel rather than in a complementary fashion. Part of this may be due to the fact, which she points out, that technical standards for the NSDL were not in place when earlier projects such as Instructional Architect began creating tools. Are there some specific approaches that the NSDL should consider to avoid similar problems in the future? For example, could the annual RFP be more explicit in referencing what’s already been developed and suggesting directions for combining related efforts?

  • Lois McLean March 8, 2009, 4:20 pm

Despite the often-expressed value of the NSDL Annual Meeting, Whiteboard Report, and other community activities, projects can still be unaware of very specific resources, tools, or evaluation approaches that could be adapted for their own purposes. Matchmaking efforts seemed more prominent several years ago, with events such as Tool Time. Should the NSDL again take a more active role in such activities, e.g. by sponsoring workshops for non-pathways groups, such as service providers or tool builders?

  • Lois McLean March 8, 2009, 4:20 pm

    Mimi and others have pointed out the critical nature of the NSDL Annual meeting and the value of committees (such as the Evaluation and Impact Standing Committee) in fostering collaboration. In light of the planned shift from committees to work groups, how can collaboration be encouraged and sustained in practical and effective ways?

  • Lois McLean March 8, 2009, 4:21 pm

    Mimi comments that the NSDL continues to be primarily driven by technical concerns, leaving out the voices of the users, especially in the K-12 world. Do you agree? If so, how can the NSDL foster development that acknowledges and acts on the needs of that user base?

  • Kuko Ako March 24, 2009, 11:01 am

    I agree 100% with Mimi’s point (as pointed out by Lois) about the NSDL being driven by technical concerns. While technical elements are an essential component of the NSDL, attention has to shift to the needs of educators who are accessing the online resources, and also to quality versus quantity. In these times of increased accountability and high-stakes testing, we should try not to lead teachers astray by presenting them with tons of content that is only tangentially or topically related to the big ideas they have to teach. Otherwise they will promptly leave. So an important question is: What objective measures—preferably based on human rather than machine methods—can we present to users to help them quickly decide whether a given resource is worth checking out further, that is, likely to be useful to their teaching?