[Humanist] 24.355 programming for us

Humanist Discussion Group willard.mccarty at mccarty.org.uk
Thu Sep 23 22:11:06 CEST 2010


                 Humanist Discussion Group, Vol. 24, No. 355.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist at lists.digitalhumanities.org

  [1]   From:    jeremy hunsinger <jhuns at vt.edu>                            (9)
        Subject: Re: [Humanist] Programming for us

  [2]   From:    Simone Hutchinson <simone.hutchinson at gmail.com>          (129)
        Subject: Re: [Humanist] Programming for us

  [3]   From:    Timothy Hill <timothy.d.hill at gmail.com>                   (48)
        Subject: Re: [Humanist] Programming for us

  [4]   From:    Shawn Graham <Shawn_Graham at carleton.ca>                   (83)
        Subject: Re: [Humanist] Programming for us


--[1]------------------------------------------------------------------------
        Date: Wed, 22 Sep 2010 20:02:20 -0400
        From: jeremy hunsinger <jhuns at vt.edu>
        Subject: Re: [Humanist] Programming for us
        In-Reply-To: <1ADC67C5-3442-4558-A619-46A43DFC3D56 at mccarty.org.uk>

> 
> 
> Second, how in general does one go about programming these days?
Try http://hacketyhack.heroku.com/ if you want to start.

And yes, there are normal practices and tools, but they vary somewhat by OS... though not so much for open source, where it varies more by school of thought.

Jeremy Hunsinger
Center for Digital Discourse and Culture
Virginia Tech

Words are things; and a small drop of ink, falling like dew upon a thought, produces that which makes thousands, perhaps millions, think. --Byron



--[2]------------------------------------------------------------------------
        Date: Thu, 23 Sep 2010 09:57:22 +0100
        From: Simone Hutchinson <simone.hutchinson at gmail.com>
        Subject: Re: [Humanist] Programming for us
        In-Reply-To: <1ADC67C5-3442-4558-A619-46A43DFC3D56 at mccarty.org.uk>

I have carried out two front-end contracts since I stopped doing web
development professionally in 2007, and on both jobs I felt as out of
date as you presume your own programming experience to be... There is
no doubting the speed at which programming languages continue to rush
on.

To reply to your specific questions, I should explain that I worked
first as a database developer, using an open source environment and
tools (Linux, PHP, SQL, PostgreSQL) to create e-commerce websites.
Soon after this I moved into interface programming instead, and had
some design training along the way. I was using Microsoft Windows and
Apple Mac environments with a mixture of Microsoft programming tools
and open source ones. Using these skills, I became a staunch supporter
of the accessibility and web standards "movements", and by 2007 had
ended up designing user testing and doing project management (on a
small scale, though).

I didn't get the chance to work with the iPad. However, I doubt that
the iPad provides a good working environment, simply because of its
size and keyboard: I expect any programmer would soon suffer painful
RSI if they had a go at programming a website or database on it.

In my experience, there were in-house specific ways of carrying out
different projects. But the golden rule in every place I worked was:
use a pencil and piece of paper first. All projects were informally
modelled on paper by the task performer, unless that person didn't
like to do that (rare!). Database tables of course were mapped out and
relational database theory was the mother 'language' we database
developers spoke in. This allowed us to generate a list of tasks, and
each task was approached one by one. I used what we call pseudocode to
map out the sub-tasks: pseudocode is literally plain English (or
whatever language is spoken) describing the actions the program needs
to perform. I suppose it's not dissimilar to preparing a dissertation
in English literature...! Once the pseudocode is satisfactory and has
been checked over, the programmer can use it as the task list as he or
she starts the actual programming.
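
For instance, a scrap of pseudocode and the code it might turn into
could look something like this. (I have sketched it in Python rather
than the PHP we actually wrote, and the newsletter sign-up task and
all the names in it are invented purely for illustration.)

# Pseudocode - the plain-English task list:
#   when a visitor submits the newsletter form:
#     1. check the email address looks valid
#     2. if it is already subscribed, do nothing
#     3. otherwise store it and confirm to the visitor

import re

subscribers = set()  # stands in for the real database table

def subscribe(email):
    """Follow the pseudocode step by step and report what happened."""
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        return "invalid address"
    if email in subscribers:
        return "already subscribed"
    subscribers.add(email)
    return "subscribed"

print(subscribe("reader@example.org"))   # subscribed
print(subscribe("reader@example.org"))   # already subscribed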

Where I worked, there were in-house preferences as to how to write
PHP. Certain functions would be avoided, others made use of in perhaps
idiosyncratic ways. The reason for such preferences was always the
same: optimal performance combined with best use of the programming
language. That is, there were ethics in programming that might seem
purely (merely) semantic but in fact reflected our company's approach
to computing services too. As I mentioned, we used open source
materials and aimed to be clear, effective and intelligent in our
programming.

When I moved into interface programming, similar methodological
standards prevailed. However, because of the notorious problems the
Browser Wars caused for web developers, such standards were
continually reassessed by intelligent, progressive people and also by
some less ethically minded ones. I had to use code cheats to work
around some of the browser-war problems. Then, as I became more
experienced, and as the HTML and CSS specifications adapted to solve
some of these problems, I was able to use code that was no longer a
blatant cheat but rather a suspect redundancy...

However, times moved quickly, and I was soon able to code an interface
without any recourse to cheats. I was able to use strictly 'ethical'
HTML and CSS and still avoid browser-associated problems. I emphasise
'ethical' because by this point, I remember, things had started to
become personal! People would comment on well-known programmers' blogs
about their choice of HTML specification: the arguments for and
against using the STRICT document type definition were heated. There
are other examples I could dig up.

There were technical limitations imposed on me as a programmer which I
took for granted, yes. Off the top of my head I would name the choice
of Windows or Mac, and then the choice of language for a given job -
PHP or ASP? - each having different benefits and disadvantages. The
project brief itself defines the tasks; this brief is set by the
client, and in most projects I was but a cog in a machine. I had
control only over the methodology.

So far I have discussed only those of your questions that relate to
the programmer. What about the user? What kind of user would be
interested in the quasi-philosophical, quasi-political issues I
briefly refer to above? I think they deserve - to borrow your cited
word - erudite attention from the academic communities, but I doubt
they will retain the interest of a lay computer user.

What I found particularly interesting, and indeed rousing, about
working as a developer was how there was no imposition of law about
what standards to use: instead, a self-regulating community of
developers across the world co-operated, adapting to changes in
technology and user behaviour. Some programmers and bloggers became
famous amongst coding crowds; small hegemonies dominated and defined a
status quo of best practice. However, this best practice was always
under critique by programmers and even by those who had been part of
defining such. The freedom provided by the internet to express and
foster discursive communities was very important. It seems now,
looking back, that there was a degree of innate altruism and
co-operation amongst the programmers. They were working to achieve a
common goal, which I hazard to say was indeed about 'data': to
manipulate and model it, to create new forms of it that better serve
some predefined goals. These goals can be banal - e.g. the user wants
to subscribe to a newsletter and have a personal profile page where
they change their preferences - or more significant - e.g. a visually
impaired person wants to navigate a website and access the same
information as a sighted person.

Indeed, I
have not discussed at all the user modelling which takes place during
a project's initial phase: the use case scenario is very important.
Trying to work out user behaviour and then define that behaviour in
order to let it direct the definition of tasks requires a bit more
room than I've given it here.

I wonder what has become of the blogs and crowds I used to follow... I
have since returned to university as a postgraduate in English
literature.

I hope this is helpful in some way and I apologise for the length of
my response.

Best wishes,
Simone Hutchinson

On 23 September 2010 00:41, Willard McCarty
<willard.mccarty at mccarty.org.uk> wrote:
> The following questions may seem rather naive. I left programming so long ago that the languages I wrote in are likely to be barely recognizable by name. And the programming practices (those we were taught as canonical) are even less likely to be familiar.
>
> First, what is it like to work in an environment such as is provided by the iPad or similar device? How does one think about the task in hand as the task-performer conceives it? What does one take for granted? What are the limitations imposed by the system, and how does one get around them or use them to reshape the task?
>
> Second, how in general does one go about programming these days? Are there more or less standard ways of working? How much of what one can say on this topic would be of benefit if communicated to those who use our tools and do not write programs? Are there good books on "algorithmic thinking" (as a colleague of mine calls it)? How would you like to see the world educated as to what you do?
>
> Morgan Tamplin, a colleague at Trent University (Canada), once wisely formulated the key to understanding what our machines are for as learning to see one's object of study "as data" -- or, I like to say, "as if it were only data". The American anthropologist Pascal Boyer speaks of different modes of reasoning, one of them "scientific", the other "erudite" (as in the humanities), best if they cohabit in the same mind. Tamplin's "as data" and the humanist scholar's native mode likewise.
>
> Comments?
>
> Yours,
> WM
>
> -----
> Professor Willard McCarty
> staff.cch.kcl.ac.uk/~wmccarty/
> _______________________________________________
> List posts to: humanist at lists.digitalhumanities.org
> List info and archives at: http://digitalhumanities.org/humanist
> Listmember interface at: http://digitalhumanities.org/humanist/Restricted/listmember_interface.php
> Subscribe at: http://www.digitalhumanities.org/humanist/membership_form.php
>



--[3]------------------------------------------------------------------------
        Date: Thu, 23 Sep 2010 11:18:14 +0100
        From: Timothy Hill <timothy.d.hill at gmail.com>
        Subject: Re: [Humanist] Programming for us
        In-Reply-To: <1ADC67C5-3442-4558-A619-46A43DFC3D56 at mccarty.org.uk>


This won't be the most profound reply to the questions you raise, as
I'm writing from the other side of the divide: with no real
programming experience prior to 2002, I can't really conceptualise
what the world was like before object-oriented approaches became the
entrenched orthodoxy.

This being the case, I'm struck by your references here - and in an
earlier thread, concerning the extent to which humanities scholars
ought to be programmers - to 'algorithmic' approaches and styles of
thinking. While obviously programming is fundamentally algorithmic,
and objects and methods are in a sense just wrappers around
algorithms, this is never the level of abstraction I start from when
thinking about software design. I start by thinking about what objects
the problem domain contains and the possible interactions amongst
them: the more-explicitly algorithmic stuff I leave until much later
in the process. Often I feel the most helpful medium for communication
with the academics on whose projects I work would be some kind of UML
entity-relationship diagram outlining how they conceptualise their
field. If we agree on how the domain is to be modelled, questions of
implementation and coding can be left to the developers, without the
need for the academics to think about anything more 'computational'
than this formalised model of their domain.
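
To make that a little more concrete, here is a toy sketch - in Python,
with a domain chosen purely for illustration and not drawn from any
real project - of the kind of formalised model I have in mind. The
point is the entities and the relationships between them, not the code
itself:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Scribe:
    name: str

@dataclass
class Manuscript:
    shelfmark: str
    copied_by: List[Scribe] = field(default_factory=list)

@dataclass
class Work:
    title: str
    witnesses: List[Manuscript] = field(default_factory=list)

# Only once a shared model like this is agreed do questions such as
# "how do we detect shared scribes across witnesses?" become matters
# of implementation for the developers.
iliad = Work("Iliad")
iliad.witnesses.append(Manuscript("Venetus A", [Scribe("Scribe A")]))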

As for the question about iPad (>ahem<, inter alia) development, I
can't offer many specific insights, as I'm only starting to explore
mobile dev. But speaking in generalities, I'd note that they share in
the general trend of moving developers further and further away from
the iron: indeed, the only reason I'm now contemplating a bit of
iPad/Android development is that mobile dev is no longer a question of
hacking around in the lower levels of device operating systems. Most
development now occurs way on top of a vertiginous tower of layers,
beneath which the chip and the fundamentals of the system are pretty
much buried.

The result is that the bulk of development work is now a much more
"cultured" activity than it probably was, say, twenty years ago:
typically, the learning process is that of familiarising yourself with
the conventions used by framework architects, rather than butting your
head against the limitations of hardware. The legendary cold, hard
logic of the machine is overlaid by several layers of thoroughly human
constructs, some of which - particularly during debugging - can appear
thoroughly arbitrary.

Admittedly this is a tendency that ebbs and flows. After years of
browsers being the primary development target, mobile devices have
shifted attention back to the OS, eliminating one huge layer-stack.
But over all, I'd say the contemporary developer operates at a
higher level, and in terrain arguably more congenial to the humanist,
than did those of earlier generations.

Timothy Hill
Centre for Computing in the Humanities
King's College London



--[4]------------------------------------------------------------------------
        Date: Thu, 23 Sep 2010 09:37:25 -0400
        From: Shawn Graham <Shawn_Graham at carleton.ca>
        Subject: Re: [Humanist] Programming for us
        In-Reply-To: <1ADC67C5-3442-4558-A619-46A43DFC3D56 at mccarty.org.uk>

  Interesting questions, Willard.

Regarding programming - my interest at the moment is in agent-based
modelling (ABM), or simulation, to explore archaeological & historical
questions. An ABM is an environment where you specify the behavior of a
single individual, and then create hundreds of such individuals who are 
heterogeneous in their individual characteristics. You let them interact 
playing by the rules, and see what emerges. The key there is specifying 
the individual behavior - that's the model, derived from some phenomenon 
you've observed out there in the wild. I regard programming & model 
building as a way of formalizing explicitly what it is I think about a 
particular phenomenon - and then I build a model that targets the 
simplest expression or analogue of that phenomenon. I try to keep it 
simple & stupid.

I use the NetLogo environment for this -
http://ccl.northwestern.edu/netlogo/ - which started out life as a
programming environment to help high school students understand
complex systems. It's an interpreted language, so the actual code
reads rather 'near English'. A model studying how information diffuses
along Roman roadways might create a population of agents who travel up
and down those roadways. The key mechanism could be to pass a message
along, so the code might look something like this (in NetLogo):

to have-you-heard-the-news
  ;; each traveller tells any agent on its patch that has not yet heard
  ask other turtles-here with [ heard-the-news = 0 ]
    [ set heard-the-news 1 ]
end

'other turtles-here' reports everyone sharing the agent's bit of the
local environment (its patch), and 'heard-the-news' is a yes/no (0/1)
variable of each agent.

...and then you watch how long it takes for everyone in the model 
population to hear the news, or you could observe where the roads (like 
at intersections or fora) enable jumps in the numbers of people who can 
hear-the-news, or extract other information - do particular patterns of 
roadways enable greater/lesser information diffusion?  Then you look 
back at the history of road building in that part of the world, and 
perhaps tie the model results to Ray Laurence's observations about 
'space-economy' or other ideas about Romanization...
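
To show the kind of measurement I mean, here is a very rough sketch -
in Python rather than the NetLogo I actually use, with a made-up
little road network and made-up numbers - of counting how many ticks
it takes for everyone to hear the news:

import random

# towns and the roads between them (a toy adjacency list)
roads = {
    "Roma": ["Capua", "Ostia"],
    "Capua": ["Roma", "Beneventum"],
    "Ostia": ["Roma"],
    "Beneventum": ["Capua"],
}

# fifty travellers start in random towns; one of them has the news
travellers = [{"town": random.choice(list(roads)), "heard": False}
              for _ in range(50)]
travellers[0]["heard"] = True

ticks = 0
while not all(t["heard"] for t in travellers):
    for t in travellers:
        t["town"] = random.choice(roads[t["town"]])  # walk one road
    for town in roads:  # everyone in the same town shares the news
        here = [t for t in travellers if t["town"] == town]
        if any(t["heard"] for t in here):
            for t in here:
                t["heard"] = True
    ticks += 1

print("everyone had heard the news after", ticks, "ticks")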

So NetLogo is an excellent tool for this mode of investigation, and it
also makes it possible for non-programmers to look at the 'procedural
rhetorics' of the simulation and understand what's going on - it
mitigates the black-box syndrome, to a degree (a problem with *any*
kind of digital representation - code is a kind of rhetoric, and needs
to be interrogated; the more transparent we are, the better...?).

I'd recommend Nigel Gilbert and Klaus G. Troitzsch's 'Simulation for
the Social Scientist', 2nd ed. (Open University Press, 2005),
especially chapters 8 and 9, where they explore agent-based models
specifically and take the reader through the creation of a model; also
the Journal of Artificial Societies and Social Simulation,
http://jasss.soc.surrey.ac.uk/JASSS.html

If anyone is interested in seeing some of my archaeological ABMs in 
action, these live at www.graeworks.net and I'd love to connect with 
anyone interested in the approach (or currently using it in their own work).

Thanks,
Shawn

On 9/22/2010 7:41 PM, Willard McCarty wrote:
> The following questions may seem rather naive. I left programming so long ago that the languages I wrote in are likely to be barely recognizable by name. And the programming practices (those we were taught as canonical) are even less likely to be familiar.
>
> First, what is it like to work in an environment such as is provided by the iPad or similar device? How does one think about the task in hand as the task-performer conceives it? What does one take for granted? What are the limitations imposed by the system, and how does one get around them or use them to reshape the task?
>
> Second, how in general does one go about programming these days? Are there more or less standard ways of working? How much of what one can say on this topic would be of benefit if communicated to those who use our tools and do not write programs? Are there good books on "algorithmic thinking" (as a colleague of mine calls it)? How would you like to see the world educated as to what you do?
>
> Morgan Tamplin, a colleague at Trent University (Canada), once wisely formulated the key to understanding what our machines are for as learning to see one's object of study "as data" -- or, I like to say, "as if it were only data". The American anthropologist Pascal Boyer speaks of different modes of reasoning, one of them "scientific", the other "erudite" (as in the humanities), best if they cohabit in the same mind. Tamplin's "as data" and the humanist scholar's native mode likewise.
>
> Comments?
>
> Yours,
> WM
>
> -----
> Professor Willard McCarty
> staff.cch.kcl.ac.uk/~wmccarty/
> _______________________________________________
> List posts to: humanist at lists.digitalhumanities.org
> List info and archives at: http://digitalhumanities.org/humanist
> Listmember interface at: http://digitalhumanities.org/humanist/Restricted/listmember_interface.php
> Subscribe at: http://www.digitalhumanities.org/humanist/membership_form.php

-- 
Dr. Shawn Graham, RPA
Assistant Professor of Digital Humanities
Department of History

406 Paterson Hall
Carleton University
1125 Colonel By Drive,
Ottawa, Ontario, K1S 5B6
Canada




