Since the late 1980s growing numbers of mental health professionals and media commentators in Britain and the USA have been concerned with a behaviour labelled as ‘self-harm’, ‘deliberate self-harm’ or ‘self-injury’. It is often seen as a secret or hidden practice, and it is almost always ‘on the rise’, especially among adolescent females. Most commonly it refers to self-cutting or self-burning, performed in order to relieve intolerable emotional tension or numbness.
In contrast, during the 1960s and ’70s, the term ‘self-harm’ generally referred to somebody ‘crying for help’ by taking an overdose (self-poisoning). Now, it predominantly means regulating emotional tension by self-cutting or -burning. Yet the ratio of overdosing to cutting in hospital statistics hasn’t changed very much, remaining around eight or nine to one in favour of self-poisoning. Why, in such a short space of time, have popular self-harm stereotypes shifted so dramatically?
The first thing to acknowledge in the face of this shift is that self-harm hasn’t always meant what we think it means. In the very recent past in Britain, ‘self-harm’ did not conjure up images of blood and cutting, but medication and overdosing. The ways in which we understand self-harm are both relatively recent and incredibly narrow.
This goes against some ideas of self-harm as timeless and almost mystical, which link it to religious self-flagellation, bloodletting, and even Tibetan tantric practices and the Passion of Christ – all of which focus on, or involve, bleeding.
The term ‘deliberate self-harm’ was proposed as a new label in 1975 at a hospital in Bristol. It was used to describe a group of patients of whom 92 per cent had poisoned themselves (mostly with prescription or over-the-counter medication).