A few days ago, I wrote about Brian Frye's article in Techdirt where he discussed ScholarSift, a new platform for legal research. A broader discussion of that platform and my concerns with it is in my earlier post--in short, users can submit an article on ScholarSift (either a draft article of theirs, or a copy of a completed article), and the platform analyzes the text and citations to return "relevant" results. Notably, the platform highlights relevant results that are not included in the citations, which may help direct writers to authors and articles that they may have otherwise missed in their research.
I ended up getting credentials to try out the system and submitted a few articles--both draft articles and completed articles. It was a mixed experience. For the drafts that I submitted, I found a few articles that I had not tracked down in my research thus far. And for some of the articles I tried out, there was a nice diversity of results, although it was sometimes difficult to sort through the long list of results that some of the articles generated. While the organization of results was sometimes unclear and difficult to sift through, the ability to filter between journal articles, books, and cases was a welcome feature. The basis for the organization remained unclear, though--there was little indication of why results were listed in a particular order beyond a vague "relevance" criterion that was measured in unknown degrees.
Some articles I tried out caused the system to turn out some odd, unhelpful results. I submitted one article, a draft paper surveying state self-defense laws and applying philosophical takes on freedom of belief to determine the ideal approach to this area of the law. The draft included a few pages that discussed the phenomenon of self-defense in cases where the defendant was trapped in a cycle of domestic abuse and violence prior to killing their domestic partner and whether that history of abuse may factor into the defendant's mindset--a scenario that much of the literature describes as the "battered woman" defense. The "battered woman" phrase seemed to have a disproportionate impact on the results that were generated, as nearly all of the articles and cases addressed this phenomenon, even though that discussion was only a small portion of the article itself. I suspect this may have been a result of the contents of the database of articles from which ScholarSift draws its results, or it may have been because the "battered woman" phrase was repeated several times in the text and citations (although the phrase "self-defense" was used even more frequently).
I also submitted my article, Shooting Fish, to see how the platform might respond to an article on an unconventional topic with a wide range of statutory citations. The results brought back a disproportionate number of articles about fishing rights and practices in the context of American Indian tribes. This was, admittedly, an area I did not address in the article. I made the decision not to explore tribal laws partly because I wanted to limit the scope of the article (for the same reason, I relegated my discussion of federal restrictions to under a page). I also am not as familiar with tribal law and was concerned that I would not be able to conduct systematic and thorough research of those laws. While I acknowledge that ScholarSift fulfilled its purpose of identifying an area of the literature that I did not address, its overwhelming focus on that area of literature ended up crowding out other results that were related to laws and issues that I did address in the article. The platform did generate several relevant results (I was familiar with several of the articles it generated from research I'd done in related areas), but the disproportionate focus on literature related to American Indian laws, treaties, and rights made the results a bit more difficult to navigate.
Other articles I submitted suggested that there's still a way to go with the database. I submitted a draft article I'm writing on trial by combat in American law--the results ended up being all over the map. While trial by combat is not a subject of common discussion in modern legal literature, it is addressed more frequently in history articles. Although I occasionally got results for papers from journals outside the legal field (some other articles I submitted returned citations to medical and psychological journal articles), historic literature on trial by combat was conspicuously absent from the results.
Similarly, I submitted an article on pew rights and related legal disputes to stress-test the database's capabilities. The results ended up being as helpful as I could have expected. There was a lot of First Amendment literature in the results that did not really match up, but I was pleasantly surprised by the number of hits for articles discussing intra-church disputes and court treatment of canon law.
I also noticed that I kept having to sign out and log back in after every two or three article searches. This was not a substantial burden, but it made me feel judged. Perhaps I was offending the system with the bizarre articles I was submitting.
From my experience so far, I stand by the conclusions in my earlier post. I think that ScholarSift is a useful tool to have available for legal research and writing. At this stage, it certainly is not sufficient to serve as the only tool--after all, it is designed to analyze near-complete drafts to determine what sources and citations are missing, and authors need to be able to do the research to get to that stage of the draft. I still have qualms with how the platform works--the metrics behind the "relevancy" determinations remain entirely unclear, as do the contents of the database from which the platform draws. If ScholarSift's database continues to expand to older works and articles in non-legal fields, it will be a welcome addition to the other research platforms that are currently available.
Labels: criticism, legal writing, scholarship