Author: Gradeigh

Of Two Minds, One History, and Multiple Addresses

Bitcoin is still a relatively young concept (in economic terms; in technology terms it’s practically a wizened wizard) dating back to 2008. I originally formulated a research question for my adviser’s Research in HCI course at Rutgers University as: “What are the barriers to entry that are stopping widespread adoption of Bitcoin?”. My curiosity is mostly piqued by the high injection of capital into the market, giving it that meteoric high of ~$1000/bitcoin a few years back. There are also the (supposed) advantages of the currency touted by its believers: low-to-zero transaction fees, fast transactions, micropayments, no third parties, etc. Moreover, it is just an unbelievably interesting concept to me. Paper currency is not something that has kept up with the times. Though we have electronic payment methods, these are still mostly “offline” solutions, and ACH is not something fun to deal with. Bitcoin is the first payment method created for the internet age; its representation is purely online and, indeed, it cannot exist without the internet.

The results (and their corresponding discussion) have emerged in our pre-print paper: Of Two Minds, Multiple Addresses, and One History: Characterizing Opinions, Knowledge, and Perceptions of Bitcoin Across Groups. Note: The preprint seems to have messed up formatting in the Discussion section; the paragraphs used to be more fluidly merged. This is true of the WEIS submission. :/

I proposed to study this question with a dual-group study design: one group consisting of those who don’t actively participate in Bitcoin’s ecosystem and another of those who do. Afterwards, as a group (consisting of myself, my adviser, and another student named Xianyi Gao), we drafted a loose set of questions to be used in semi-structured interviews with these two groups. I then conducted the user studies over the phone, in person, and on Skype for about two to three weeks with 20 people.

One thing we did not structure well for was how to ask people why they don’t use Bitcoin when they’ve never heard of it. This is glaringly obvious now, but it went over our heads at the time; as such, the first few interviews with non-user participants were not useful in this regard. So I decided to flip the question around midstream and ask them: what aspects of an ideal payment system would appeal to you? If there is a mapping between Bitcoin’s advantages and their desires, then Bitcoin fits one of their mental models.

One of the interesting things that we ran into is that people who use Bitcoin really don’t understand the mechanics of it that well. In this respect, Bitcoin is very similar to traditional forms of currency; it is not clear to participants exactly how fiat currencies work either. When we asked what gives money its value and why we accept it, many participants couldn’t answer fully or declined to comment. As such, their protestations that they cannot use Bitcoin because they don’t understand it are a deflection. What they more likely mean is that they do not use Bitcoin because they do not see a NEED to use it, and this is very true. Paper money is the first currency people use, it is accepted universally, and it has agents (the government) to promote its use. Bitcoin comes off more as an interesting-but-limited novelty; one can say it eases transactions and makes online payments fluid and time-independent, but this is not something people think they need. They are used to the status quo; they cannot conceive that there is a problem, let alone a solution.

And there are many other things, of course, that are discussed in the paper! Suffice it to say, I hope this is not the last time I tinker about with Bitcoin; I have quite a few puns left for paper titles that I need to get out of my system :).

Security Engineering: Introduction and Illustrative Example

My adviser is teaching a new class at my institution on Security Engineering.
It derives a lot of source material from Ross Anderson’s book of the same name. Since the book dates to around 2008, it is missing a lot of material on pervasive computing security and the like, so it will be supplemented by relevant conference publications.

This semester, I’m planning on making frequent blog posts about the material learned in this class. A sort of challenge to keep my blog consistently updated and to make myself take better notes.


  • Architecture & landscaping help with physical security; using physical constructs to protect your valuables (e.g., a moat or large windows).
    • Airports have TSA checkpoints.
  • Need to avoid running into snake oil concepts in security.
    • Examples of snake oil: “military-grade encryption,” “AES is bad because it has a 256-bit key but we have a 1024-bit key,” and similar nonsense.
  • Class Exercise: Threat Modeling
    • Threat modeling is the act of analyzing the security of a situation in a systematic fashion using actors, attacks, and defense scenarios.
    • Scenario: prepaid service cards! These can be things like purchasing airtime minutes for AT&T or online play time for Xbox Live.
    • What are the types of threats against prepaid services? To do this, we need to begin looking at the different actors & stakeholders.
    • Actors: Person who buys the prepaid card. Person who sells the prepaid card. Person who manufactures the prepaid card. Person who uses the prepaid card. The provider.
      • The system’s goal is to distribute money without requiring a credit card or billing to a main account. The idea is that you avoid these by, say, purchasing the card with cash.
        • Prepaid identifies a subset of people who don’t have credit cards or are unwilling to use them. This information might be sellable by the service to advertisers who want to target them.
      • Prepaid cards are one-shot AND itemized, so the provider knows where a card was sold and who currently owns it. You can make a connection there between people and where they go to purchase cards (tenuous, since the card can be a gift).
      • Prepaid cards can be fraudulent; they could contain no money at all. One actor could sell a used card without the buyer knowing, if there is a way to read the code below the surface of the strip.
      • When you activate the prepaid card on the telephone, it may be possible to trace that transaction information to a specific phone number, compromising user privacy again.
      • The card codes can be brute-forced; if there is an algorithmic flaw, the money can be drained from a card. One could also prevent people from using the service properly by shutting it down, making the card worthless.
      • The manufacturer can be compromised; someone at the printing press could be forced to print flawed or mistaken cards while pocketing the money tied to the original codes.
    • Official Actors:
      • End user
        • Attacks: Fraud -> lie to company that the card didn’t contain any money.
          • Defense: You bought it, you lost it. It’s your fault. We have no obligation to you.
            • Banks call it ID theft instead of fraud for a legal liability reason. They make it your problem, not theirs.
        • Attacks: Lost it -> sell it -> plan to use later.
          • Defense: You bought it, you lost it. It’s your fault. We have no obligation to you.
  • Service provider
  • Retailer provider/store/cashier
    • Attack: They don’t activate the prepaid card when it’s purchased.
      • Defense: Point of sale. You can’t ring up the card for sale without activating it at the same time.
      • Defense: Receipt needs to say that a card has been sold and that it has been activated.
    • Attack: Read token/copy the code.
      • Defense: The code can’t be read without scratching it off, but this is only a weak protection.
      • Defense: Tamper-resistant enclosure. You want to prevent people from messing with cards inside large boxes that could be activated en masse.
    • Attack: Using the token attack?
    • Attack: Fraud – selling false cards
      • Defense: Use holographic/physical design, just like with money, to prevent being sold false goods.
      • Defense: Buy from a brick-and-mortar store or a trusted brand where fraud is less likely (like Amazon or eBay).
  • Wholesale provider/warehouse
  • Gift giver
  • Billing/authorization
  • Manufacturer
  • Hacker
    • Attack: Guessing/brute forcing PIN codes.
      • Defense: Simple database of used PINs.
      • Defense: Make the codes difficult to reverse-engineer: long strings drawn from a huge space, with only a tiny fraction of the possible codes actually in use.
      • Defense: Track people who have multiple failed attempts and ban their information from the service? Or employ throttling.
  • Petty thief
  • Counterfeiter
  • Copier
    • Are these all of the actors?
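To make the one-shot and brute-force defenses above concrete, here is a toy sketch of a redemption service that combines one-shot codes with per-source failure throttling. The class name, code formats, and thresholds are all illustrative assumptions, not any real provider’s scheme.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a prepaid-card redemption service combining two defenses:
// codes are one-shot, and sources with repeated failed guesses are cut off.
// Names and thresholds are illustrative, not any real provider's design.
class RedemptionThrottle {
    private static final int MAX_FAILURES = 5;                     // lockout threshold
    private final Map<String, Integer> failures = new HashMap<>(); // per-source failed guesses
    private final Map<String, Long> balances = new HashMap<>();    // code -> balance in cents

    RedemptionThrottle(Map<String, Long> issuedCards) {
        balances.putAll(issuedCards);
    }

    // Returns the card's balance on success, or -1 on failure or lockout.
    long redeem(String source, String code) {
        int failed = failures.getOrDefault(source, 0);
        if (failed >= MAX_FAILURES) {
            return -1;                        // throttled: too many bad guesses from this source
        }
        Long balance = balances.remove(code); // one-shot: a code redeems at most once
        if (balance == null) {
            failures.put(source, failed + 1); // count the failed guess against the source
            return -1;
        }
        return balance;
    }
}
```

A sparse code space (the defense above about long strings) makes the throttle bite even harder: if only a tiny fraction of possible codes are live, each guess a source is allowed before lockout almost certainly fails.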



NSA’s SPECK Block Cipher in Java

Edit: Updated to work with BigIntegers of arbitrary bit size.

I was going about trying to run my own timing analysis of NSA’s new block ciphers, SIMON and SPECK, and I couldn’t find any easy Java implementations of them. I had thought that by this time Bouncy Castle would have added them to their library, but alas they did not. I went ahead and implemented SPECK myself and was quickly reminded of how different things are when you work at the bit level!

My main troubles here were: 1) properly handling the bit operations, and 2) Java 7’s lack of unsigned integers!!! (fixed in Java 8, oh how far we have come~).

First, the implementation is based off of the original paper: “The SIMON and SPECK Families of Lightweight Block Ciphers” (Beaulieu et al., 2013).

Start with the definitions:

So that’s the standard stuff.
Move to a constructor. Note that there are standard sizes, seen above as comments; the checking for this hasn’t been programmed, but it seems like a couple of switch statements would do. I left it up to the user to decide because I’m lazy.

Constructor! Pretty straightforward. Note that there are only two pairs of values for the shifting constants, alpha and beta.
I lazily constructed a large array for l, but in case of memory concerns I wrote a second way to do it. The values aren’t reused; you could store as few as 3 values and get away with it! 😀

Key expansion! This is shown in two ways in the paper: inside and outside the algorithm. I opted for outside. The round function uses circular shift functions. We’ll get into what unsign and the cyclic shifting functions are further down.

The encryption function. Implemented exactly as seen in the paper.

Decryption! In the paper, they do x first and y second when describing the inverse. I find it’s easier to do something like this instead:
1. Do the reassignment for y first.
2. Follow through with x.

So reversing the encryption gives:

x = (rightShift(x,alpha) + y) ^ k[i];
x ^ k[i] = rightShift(x,alpha) + y
(x ^ k[i]) - y = rightShift(x,alpha)
leftShift((x ^ k[i]) - y, alpha) = x

which is what we see up there.

OK, unsign, leftCyclicalShift, and rightCyclicalShift. This is explained in the comments but I’ll reiterate here:
unsign: This is needed because Java doesn’t have unsigned integers. Doing a bitwise AND with a mask of all ones (the highest n-bit value) reduces the result mod 2^n, turning a negative intermediate value back into its correct non-negative representation. This needs to be done after rotations and subtractions.

leftShift & rightShift: I got into a lot of trouble using Integer.rotateLeft and Integer.rotateRight from the JDK prior to switching to BigInteger. The problem is that the number of bits in the specified word is not the same as in an integer; you have to rotate by the number of bits you’re USING in your word size, not the number of bits in the integer. This problem exists doubly in BigInteger; the data size is unbounded, so doing rotations is tricky. You need to mask off anything above your word size and wrap those bits around to the other end.
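Putting all of the pieces described above together, here is a minimal sketch of the cipher as one class. The class name, argument order, and key-word ordering convention are my own choices; the round function, key expansion, and rotation constants follow the paper, and no validation of the standard (word size, rounds, key length) combinations is done, as noted earlier.

```java
import java.math.BigInteger;

// Minimal SPECK sketch over BigInteger words of arbitrary bit size.
class Speck {
    private final int n;           // word size in bits (e.g., 64 for Speck128/128)
    private final int T;           // number of rounds (e.g., 32 for Speck128/128)
    private final int alpha, beta; // rotation amounts: (7, 2) for n = 16, else (8, 3)
    private final BigInteger mask; // 2^n - 1, the "unsign" mask
    private final BigInteger[] k;  // expanded round keys

    // key is given most-significant word first: {l[m-2], ..., l[0], k[0]}.
    Speck(int n, int T, BigInteger[] key) {
        this.n = n;
        this.T = T;
        this.alpha = (n == 16) ? 7 : 8;
        this.beta = (n == 16) ? 2 : 3;
        this.mask = BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);
        this.k = expandKey(key);
    }

    // Key expansion, done outside the encryption loop (the lazy large array for l).
    private BigInteger[] expandKey(BigInteger[] key) {
        int m = key.length;
        BigInteger[] l = new BigInteger[T + m - 2];
        BigInteger[] rk = new BigInteger[T];
        rk[0] = key[m - 1];
        for (int i = 0; i < m - 1; i++) l[i] = key[m - 2 - i];
        for (int i = 0; i < T - 1; i++) {
            l[i + m - 1] = unsign(rk[i].add(rightShift(l[i], alpha)))
                    .xor(BigInteger.valueOf(i));
            rk[i + 1] = leftShift(rk[i], beta).xor(l[i + m - 1]);
        }
        return rk;
    }

    BigInteger[] encrypt(BigInteger x, BigInteger y) {
        for (int i = 0; i < T; i++) {
            x = unsign(rightShift(x, alpha).add(y)).xor(k[i]);
            y = leftShift(y, beta).xor(x);
        }
        return new BigInteger[] { x, y };
    }

    BigInteger[] decrypt(BigInteger x, BigInteger y) {
        for (int i = T - 1; i >= 0; i--) {
            y = rightShift(y.xor(x), beta);                        // reassign y first...
            x = leftShift(unsign(x.xor(k[i]).subtract(y)), alpha); // ...then recover x
        }
        return new BigInteger[] { x, y };
    }

    // AND with the all-ones mask: reduces mod 2^n, fixing negative intermediates.
    private BigInteger unsign(BigInteger v) {
        return v.and(mask);
    }

    // Left cyclic shift by s within n bits (not the unbounded BigInteger width).
    private BigInteger leftShift(BigInteger v, int s) {
        return unsign(v.shiftLeft(s).or(v.shiftRight(n - s)));
    }

    // Right cyclic shift by s within n bits.
    private BigInteger rightShift(BigInteger v, int s) {
        return v.shiftRight(s).or(unsign(v.shiftLeft(n - s)));
    }
}
```

With the Speck128/128 parameters (n = 64, T = 32, a two-word key), encrypting and then decrypting a block round-trips correctly; checking against the paper’s published test vectors additionally depends on matching its byte/word ordering conventions, which here are my own choices.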

So that’s it.
I’ll probably do a simple SIMON implementation in Java as well in the coming days.


For the last few months, I’ve been very focused on trying to submit successful fellowship applications. My main targets are ones that are big in computer science & computer engineering: Microsoft Research, Facebook (not done yet), NSF, NDSEG (not done yet), etc.

It has been quite exhausting, to say the least. Writing a fellowship application that mandates a personal statement is a mentally taxing experience because you are forced to constantly sell yourself at every point along the way. I’ve always found this a little difficult because I’m the type of person who tends to avoid overestimating my abilities in order to force myself to keep working. A bit of mental trickery, if you will; as such, I’ll often downplay praise or avoid the spotlight to avoid a growing ego. The goal of the statements isn’t to encourage that sentiment, of course, but to make you create a three-dimensional version of yourself from your written word. That’s certainly difficult for me because I hate to sound like I’m bragging, while understanding that, yes, you must sell yourself or be doomed to go home hat in hand.

The research statements are quite easy since I’ve been working at this for a while and generally have a good grasp of what it is I’m trying to accomplish, what I can do in ‘x’ amount of time, and why the problem is worth solving (hint: all research problems are worth solving with the right marketing campaign).

So far, I’ve done:

  • Microsoft Research
  • Qualcomm Innovation Fellowship

In that order. The MSR one was a good preparation for the GRFP since it forced me to get a research plan draft done very early that was in excess of the GRFP requirements (5 pages versus 2 pages). So most of my work went into advertising myself in the research statement :D.

Qualcomm was a little different. This fellowship is only open to certain universities that Qualcomm gets a lot of work from. WINLAB, certainly, does a lot of work with them on the communications front; Dr. Lu in our solid state department works on hardware with them. The research proposal required a team component, so I worked with Xianyi Gao in my lab and produced a research plan on crowdsourcing that I’m hoping will appeal to the team over there, given that Qualcomm made a high-profile purchase of a crowdsourcing platform last year. The work is fairly novel and would answer some interesting questions, so fingers crossed to at least be a finalist!

The next fellowship coming up is the Ford Foundation. This is certainly a prestigious fellowship as well, but their mission statement is about increasing diversity in higher education. From what I’ve seen of previous winners, not many were Caucasians studying computer science topics like I am. 😀 I am not dissuaded, however, since my commitment to increasing diversity is what matters and I am actually proactive in this area. I have to spin a nice story with the research plan and make it more human and relatable, which is actually a great thing to be doing anyway! Really looking forward to this. 🙂

SOUPS Roundup – Privacy Preferences, Authentication Aspects, and Social Security

I had originally intended to post blog updates on my 3 days spent at SOUPS 2014, but with research/teaching/my own classes/startup(!?) work getting in the way, I figured I’d just fold all of that into this post. Since I don’t remember all the twists and turns of my SOUPS experience, I’ll leave this as my selection of papers I found interesting (based on my notes).

Here’s a list of the SOUPS papers I still want to read more in-depth:
  • It’s a Hard Lock Life: A Field Study of Smartphone (Un)Locking Behavior and Risk Perception
  • Password Activity Lifecycle
  • Privacy Attitudes of Mechanical Turk Workers and the U.S. Public (I was left feeling wanting during the presentation of this paper … lots of odd, unanswered questions were in my mind. Reading this may answer them???)
  • Behavioral Experiments Exploring Victims’ Response to Cyber-based Financial Fraud and Identity Theft Scenario Simulations

Moving on to the ones I did listen to AND read (:D):
Would a privacy fundamentalist sell their DNA for $1000… if nothing bad happened as a result? The Westin categories, behavioral intentions, and consequences (Best Paper)
Allison Woodruff, Vasyl Pihur, Sunny Consolvo, Lauren Schmidt, Laura Brandimarte, Alessandro Acquisti

This paper was an in-depth exploration of Westin’s Privacy Segmentation Index as it applies to behavioral intent and user consequences. The authors explored whether there was a correlation between users’ segmentation into Westin’s privacy groups (fundamentalists, pragmatists, unconcerned) and their actions & behaviors. While the lack of correlation between privacy groups and contextual responses is not in and of itself a novel result (by 2014’s standards), the consequences analysis is novel. This disconnect is known as the privacy paradox, wherein users’ attitudes about privacy clash with their actions regarding it. There are assumed reasons for why this is the case: Westin’s index is about general attitudes, not context-specific cases, and users might compromise their privacy concerns in context for the sake of convenience, trust, or profit (this ties to other aspects of human psychology I’ve noticed in currency/finance studies; people do not pursue the best financial decisions when there is some emotional motivator at play). The authors performed a large-scale MTurk study in two phases: Phase I involved a survey of privacy attitudes using 4 different privacy scales, personal information misuse questions, and personality/demographic characteristics; Phase II involved presenting these now-segmented people with situations whose privacy implications and outcomes vary and checking how they respond (would a fundamentalist object more strongly if their image were disseminated on the net?). The scenario that the title relates to is:

‘A marketing company offers you $1000 and free genetic testing in exchange for the rights to all your current and future medical records. They will have the right to resell or publish your data (anonymously or with information that could identify you, at their discretion)’.

There are 20 of these scenarios in total, relating to many different fields beyond health (social, finance, etc.). Results involved: suggested improvements to Westin’s segmentation (which didn’t work too well, by their own admission. Too bad, really!), the effect of brand manipulations on privacy concerns (meaning: people trust Amazon, Google, etc. more with their information than other brands), and predictors for disclosure (is there some combination of known variables that will work with these privacy segmentations to figure out if someone will give their DNA away on the internet!? The answer: sorta; not exactly). Overall some cool stuff, certainly a shoo-in for the award it got given the quality of the research, work, and writing done by the authors. I, of course, am always left wanting perfect endings and I didn’t get that here like I felt I did when I saw Gone Girl (ah, but that is a different blog post….).

Towards Continuous and Passive Authentication via Touch Biometrics: An Experimental Study on Smartphones
Hui Xu, Yangfang Zhou, Michael R. Lyu
This is a paper about a continuous authentication method for smartphones. I didn’t feel the concept of continuous authentication was, on its own, novel enough for a paper. Is this because I went to the WAY Workshop at SOUPS!? Answer: no, because continuous authentication has been done a fair bit before. Plus, I recently tore up a paper in review at another venue for some fairly lazy continuous authentication work. Actually, the contribution of this paper is the evaluation of the continuous authentication method via a 30-person user study. Let’s dig in a bit more.
The paper goes into using biometric characteristics of stroke dynamics and the like. They separate user operations into keystrokes, slides, pinches, and handwriting. They programmed an application to put their users through a training phase and asked them to perform tasks. In total, 32 people were recruited with the singular goal of collecting training data on an Android device. I would classify the chief contribution of this paper as analyzing the EER effects of the behavioral biometrics of keystroke, slide, handwriting, and pinch. I was expecting a real system to be implemented and tested on users based on my reading of the abstract, but didn’t get that.

One of my peeves about this paper is that the writing really isn’t up to the standards I normally expect from conference papers. I forgive the authors somewhat because they are from international institutions and may lack colleagues who have English as a first language, but a language mistake in the abstract is hard for me to get over. This goes back to my early college years as an English major, I think. 🙁 Another objection is the idea that, for smartphones, we have a multi-class classification problem; these phones tend to belong to just one user, so it should just be one-versus-all. Of course, multi-class is where these authentication models start to fall apart and where things really get interesting … how do you avoid collisions? Furthermore, I found the ‘month-long’ description to be misleading … I thought 30 users were authenticating on this thing for a month and giving their feedback, but that wasn’t the case at all. That was something I was really interested in. I’m also surprised these metrics aren’t combined to authenticate the user; it’s only mentioned off-handedly. There’s also no true “attacker” in the sense of having participants deliberately try to mime another person … another thing I dislike about some authentication papers. This isn’t a bad paper in any sense of the word … I just expected more and didn’t get what I came for. :/

The Effect of Social Influence on Security Sensitivity
Sauvik Das, Tiffany Hyun-Jin Kim, Laura A. Dabbish, Jason I. Hong
This is an interview study aimed at understanding why the public neither adheres to suggested privacy/security practices from experts & researchers nor uses their tools. Results indicate that social processes play a role in influencing people’s behaviors concerning privacy and security. The crux here is that security tools need to be visible and apparent to users, and their role needs to be well understood before they will be used. Not a lot to say here; the work is really quite good. This is a useful paper to cite when doing work on security preferences and users. 😀

Stray Observations: Media

I’ve been trying to cobble together a few blog posts in my mind about a variety of topics but never seem to find time to develop a full, detailed post about them.
So I’ll just throw together some musings in one post and justify an update. 😀

I very strongly hold the opinion that we live in a golden age of entertainment. The advent of cable television has given rise to programming with grand, sweeping aspirations that is endlessly fascinating. Often, new cable series don’t just feel fresh; they are endlessly self-referential and can be enjoyed on multiple levels. The attention to detail is astounding. A simple example comes from what is likely my favorite show, Breaking Bad, where I recently noticed that any time Walter White is operating from his Heisenberg mindset in the early seasons, he is almost always viewed through reflective surfaces. One instance I can remember is the first meeting with Gustavo Fring at Los Pollos Hermanos in Season 2. This was something I didn’t know about the show until after it ended, which fuels my desire for rewatching.

Every genre is seeing a renaissance on cable television. Case in point: the mundane cop show. Network television has been saturated with terrible procedural dramas with no overarching theme or story; they only exist to be sold in syndication and have no running plotlines, so the show can be picked up and abandoned at the start of a dedicated programming hour. There are endless examples of these dull, plodding shows: Law & Order, Blue Bloods, Grimm, NCIS, … repeat ad nauseam. It is extremely irritating that we still live in a world where [Insert Thing Here] of the Week occupies a slot on television. There’s no real audience involvement; no expectation of a growing story with evolving characters and elevated circumstances … just repetitive drivel. One could argue this is an allegory on life, that we can’t expect weekly excitement and often things are the same week to week. I disagree; when you can literally interchange episodes and not miss a single beat about what’s going on, then it is not a commentary on life. I could not exchange weeks of my life and expect everything to line up, and that’s true for most anyone. That’s why True Detective was such a refreshing cleanser for the procedural palate. This was a deep, complicated, and multilayered cop drama that extended way above all the network rubbish to tell the story of both a meaningful hunt for a killer and the devolution of its two main leads. Truly inspiring.

Even the soap opera has renewed vitality on cable. The titillatingly titled Masters of Sex is one of the most pleasant viewing surprises I’ve had in recent memory. I went in expecting absolutely nothing and got really layered, attention-grabbing detail and development of the lives of these characters in cloistered 1960s society. Will Masters continues to be one of my favorite leads on current television, played awesomely by Michael Sheen (whom I loved in Frost/Nixon). Lizzy Caplan is similarly a delight to watch, and their exchanges are endlessly interesting. What I find most impressive is the fact that, after some reflection, what I was effectively watching was a highbrow soap opera. And I’ll be honest: I still really enjoy it even after that realization. I suppose this feels as new to me as soap operas did to housewives way back in the ’60s. 😀

Participant Interviews

For a study I’m working on, I needed to recruit 128 participants and meet with them to generate data. While I would describe the overall experience to be a positive one — interacting with people is never a bad thing, really — there are certainly some issues I faced.

First of all, the experiment protocol and script really need to be hammered down before starting sessions. This was really critical; I thought I had the protocol down pat. It turns out the opposite was true; I found myself adding more clarifying statements for my participants than I had realized I would need before starting the interviews. Additionally, I tweaked little things that I hadn’t realized beforehand would cause issues if left unspecified. A simple example: I didn’t notice that I hadn’t specified the number of mental rotations I was asking my participants to perform. The whole packet seemed unnecessary, so for completeness I ended up asking for one page. The point being that the devil really is in the details; these fine-grained details make a big difference.

Secondly, patience is a virtue. A major one. When dealing with human subjects, it’s important to always be “on”: to always come across as eager, natural, and understanding. This is, of course, because you’re asking for their help (Hey! Come in here and do stuff for us!), but there’s also a larger reason why you should be this way … you want to engage your participants in the work they’re doing. Acting sour, tired, or being short with your participants tends to induce a mirroring effect wherein they mimic the personality traits of the experimenter, and that will bleed into the data they’re generating. Not good! An engaged participant will generate true and honest data, the type of data that leads to real results and (hopefully) that lofty height: statistical significance. Tired participants are less inclined to do so; they will follow the path of least resistance, which lets them leave early.

Finally, you need to leave time for the other things you have to do in life. I pursued the data for my study aggressively, seeing ~128 people in a 2-week period (10 days), and then again in the weeks following (two-session structure). This often left me exhausted, seeing people from 10 AM to 10 PM with no (planned) breaks in between. “Hard working,” maybe, but not good for mental health. In the future, I’m resolving to limit those hours. :/

Participant Recruitment

One of the papers I’m currently writing for CHI 2014 requires collecting data from human subjects. Without saying exactly what we are trying to do, I will say that obtaining (a priori) statistical power requires a participant pool of 128 people with 32 people to a group (so, four groups). We finalized the protocol for the study around the end of August and began subject recruitment that same week. Since the Fall semester at my institution starts the first week of September, this ended up being quite apropos. It would have been difficult to assemble all 128 people from Central NJ to come to my laboratory on Busch Campus using the general population, even if I had the whole summer to do it. The influx of students is a great boon to finishing recruitment quickly, since my institution boasts some >50,000 undergraduates alone!
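For concreteness, here is the usual back-of-envelope arithmetic behind a number like 32 per group, using the standard normal-approximation formula for a two-sample comparison. The effect size, alpha, and power below are my illustrative assumptions, not our study’s actual power analysis.

```java
// Back-of-envelope per-group sample size for a two-sample comparison:
// n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up, where d is the
// standardized effect size. The inputs are illustrative assumptions only.
class PowerSketch {
    static int perGroupN(double d, double zAlphaHalf, double zPower) {
        double n = 2.0 * Math.pow((zAlphaHalf + zPower) / d, 2);
        return (int) Math.ceil(n);
    }
}
```

With a large-ish effect size d = 0.7 at alpha = .05 (z = 1.96) and 80% power (z ≈ 0.8416), `perGroupN` gives 33 per group, in the neighborhood of our 32.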

Regardless, it’s evident to me that the method of posting flyers around campus is not the most effective tool to recruit participants. It strikes me as being antiquated and outdated — the idea of going around and posting flyers is a method that pre-dates the internet. If it wasn’t made for the internet, that bastion of communication, then it is typically ineffective for reaching a wide audience. Indeed, the flyer itself is asking people who read it to use the internet in the first place by sending an email.

Which led me to think about what the space of recruitment methods might be.

  1. Flyers. Tried, tested, and maybe true, this method is the most common way to recruit participants. Stick the flyer up in high traffic areas with an email address on it and various tear-out slips and then hope people get interested.
  2. Fixed installation. Grab a table, place it in a high traffic area, and try to actively recruit participants with some flashy advertorials or by haranguing them with a siren call about compensation when they pass by.
  3. Placing advertisements/flyers directly in areas that force interaction. In other words, place flyers or ads where a person would have to at least read/interact with the ad in order to resume what they were trying to do. This means placing flyers on windshields of cars or sticking them in lockers. Probably the most aggressive/invasive method of recruitment and only recommended if you need a huge amount of participants and don’t care about making people angry.
  4. Online recruitment methods. With these, you have the potential to cast a wide net, but it might not land on the people who can most conveniently reach your location, and as a researcher you may not have the time to go out and meet them.
    1. Facebook ads.
    2. Google ads.
    3. Mailing lists.
    4. Reddit ads.
    5. Craigslist.
  5. Targeted advertising. Instead of posting flyers in one place, put up a large ad in a place where people will run into it. For example, I could go to lecture halls at my university and write down the study details on one of the N chalkboards available. Alternatively: campus newspapers.

I haven’t figured out any more than those, but there could be others. Generally, I’ve had the most success with flyers and online recruitment (mailing lists, Reddit). I only had the energy to put up flyers once, and some of the other methods would take too much time. I’ve found really good success with mailing lists in general.

Finally, the way to really hook people, beyond getting the message out, is to have good incentives. Of course, this is always subject to the amount of money in the grants funding the research project. In our study, we require people to return for a second session, which can make it even more difficult to get participants. Even with a small monetary amount ($10), you can still get people to show up for a single session. When dealing with more than one session, it’s difficult. Right now, we’re using $30 to bring participants in for a dual-session structure and only awarding compensation at the end. Saying you’ll have food would be a nice way to bring in participants, but it can distract from the task at hand, especially if you need them to be in and out in a tight time period.

TA/GA Grievance Committee

The agreement between the labor union that represents the teaching assistants, graduate assistants, and faculty at Rutgers University stipulates that there needs to be a protocol to follow when teaching assistants and graduate assistants take issue with reappointment procedures. This is outlined in Article XI of the agreement, “TEACHING ASSISTANT/GRADUATE ASSISTANT PERSONNEL GRIEVANCE PROCEDURE”. At any point when a TA or GA is not reappointed, they can complain to a department representative. If they do not feel the representative has been able to sufficiently resolve it (which, I imagine, would only happen when there is a simple filing error rather than a flat-out rejection of reappointment), then the aggrieved party submits a formal request in writing to a committee charged for this purpose on the campus. In this instance, that committee is called the “TA/GA Grievance Committee” and is a necessary step toward the resolution of a reappointment dispute.

The committee needs to convene and make a decision about whether to support or deny the claim from the spurned TA/GA. If necessary (meaning if someone asks), a meeting will be held with the TA/GA to hear the case in full. Afterwards, the committee has ~20 days to render a decision about whether to accept or reject the TA/GA’s request. At that point, it is handed up to the Dean of the Graduate School to make a final decision based on their opinion of the matter and the opinion of the committee.

The TA/GA committee is selected by the EVPAA on the New Brunswick campus of Rutgers University and consists of 3 faculty members and 2 TA/GA members. The pool is drawn from nominees put forward by department chairs, who each select internally from their own department one faculty member and one TA/GA.

My department asked if I would be willing to serve on the committee and I said I would. Honestly, it is something that interests me; it is useful to see how TA/GA disputes might play out in the wild, and I am invested in making sure that graduate students get the right treatment. Maybe it isn’t proper to say this, but graduate students are effectively a “vulnerable” population; we are entirely at the whim of our department and, even more so, our academic adviser.

The adviser holds so much power in the relationship that everything is tipped in their favor, and they can get basically any outcome they want regarding their student. If for whatever reason the student and adviser part ways, the adviser can entirely submarine him or her without having to back it up. As such, being on a committee like this would allow me to bear more witness to that. My relationship with my adviser is not like this at all and I don’t expect it to be, but I’ve witnessed bad situations between fellow graduate students and their advisers often enough to know that I would like to help if I can (AND, most importantly, if it is appropriate!). The graduate student is not always in the right; I am merely saying that they are at a disadvantage >80% of the time. But I suppose a further exposé on a subject like this warrants a different blog post, so I digress.

So two issues remain, I guess:
1) The Dean doesn’t have to follow the recommendation of the committee. Not knowing anyone else who has served on this, faculty or otherwise, I cannot say what the usual outcome is. I imagine the Dean will select whichever option they want or whichever best serves the university. I honestly feel that a graduate student appealing a reappointment is forever at a disadvantage, and I would be surprised if they ever got reappointed. Even then, I sincerely doubt they would be appointed again after that. Most likely they are persona non grata in their department after such a kabuki show.

2) I am not yet on the committee; I would need to be selected from the pool of applicants. Maybe if I just contact the EVPAA directly expressing interest in serving, they would place me on it after keeling over in shock that someone forcefully volunteered for a committee that may only exist to placate a scorned graduate student before their final beheading (academically speaking).

It is interesting to be nominated, though! I consider it a great honor and would happily engage in committee activities. It’s a nice outlet from research and good practice in case I ever become faculty somewhere else, where committee membership is a more important credential for keeping your job than research.

SOUPS – Day 1 – WAY Workshop

I need to do more writing on this. I have to cover all 3 days and it is time consuming to do. Soon! Soon! All of SOUPS will be up here…


My overall impression of the WAY Workshop is that it was: 1) very interesting and 2) seemingly relevant. There were presentations on many diverse authentication topics, so kudos to the workshop chairs Larry Koved and Elizabeth Stobert for doing a good job on that.