Here is my "dungeon door" test mesh that I use instead of teapots. Converting an .obj to JSON turned out to be really easy in the end; I didn't need an exporter or a Python script, I just re-used my C .obj parser code and had it write all the vertex points to a JSON file. Excellent. I'll upload the program when it's a little more fully-featured. Now... textures.
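For reference, the exported file is nothing fancy; it's just a flat JSON array of floats, three per vertex, roughly like this (the key name and the values here are made up for illustration, not my converter's exact output):

```json
{
  "vertexPositions": [
    -1.0, 0.0,  1.0,
     1.0, 0.0,  1.0,
     1.0, 2.0,  1.0
  ]
}
```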
That was easy enough! I made my parser export another JSON array (remembering to put a comma after the first array), containing the texture coordinates. Then it was fairly standard GL texture loading and binding, except that you can use the JavaScript Image() object to load the image from a URL. In the shaders I had to use the varying keyword to pass the texture coordinates between the vertex shader and the fragment shader, instead of the in and out keywords that I'm used to. Texture sampling was the same. The only other issue was filtering: I used bi-linear filtering here. Mip-mapped tri-linear filtering is actually in core WebGL (via generateMipmap), but anisotropic filtering requires loading an extension; see http://blog.tojicode.com/2012/03/anisotropic-filtering-in-webgl.html.
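A minimal sketch of that loading path, assuming a WebGL 1 context already sits in a variable called gl (the function name and URL are mine, for illustration):

```javascript
// Create a texture and fill it asynchronously from a URL.
// Assumes `gl` is an existing WebGLRenderingContext.
function loadTexture(gl, url) {
    var texture = gl.createTexture();
    var image = new Image();
    image.onload = function () {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        // Flip the image vertically so (0,0) matches GL's texture origin.
        gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                      gl.UNSIGNED_BYTE, image);
        // Plain bi-linear filtering, no mip-maps.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    };
    image.src = url; // starts the download
    return texture;  // handle is valid immediately; pixels arrive on load
}

var doorTexture = loadTexture(gl, "textures/door.png");
```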
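And the varying plumbing itself, sketched as GLSL ES 1.00 source; the attribute and uniform names are illustrative, not my actual ones:

```glsl
// Vertex shader: pass the texture coordinate through to the fragment stage.
attribute vec3 aPosition;
attribute vec2 aTexCoord;
uniform mat4 uModelViewProjection;
varying vec2 vTexCoord; // 'varying' replaces the out/in pair from desktop GLSL

void main() {
    vTexCoord = aTexCoord;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: the matching 'varying' receives the interpolated value.
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord); // sampling as in desktop GL
}
```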
Next up, perhaps some Phong lighting shaders using normals. I also need to think about refactoring these demos into a semi-stable engine where I can re-use the data structures for many objects, textures, and matrices at once; some manager objects will probably be required. Maybe I should look at extension loading as well.
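Extension loading looks straightforward, going by the article linked above: gl.getExtension() returns an object of constants, or null if the extension is unsupported. A sketch, assuming a texture is already bound to TEXTURE_2D (the prefixed fallback names were needed by some browsers of this era):

```javascript
// Mip-mapped tri-linear filtering is core WebGL 1 (power-of-two textures only).
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.generateMipmap(gl.TEXTURE_2D);

// Anisotropic filtering needs the extension; null means it's unsupported.
var ext = gl.getExtension("EXT_texture_filter_anisotropic") ||
          gl.getExtension("WEBKIT_EXT_texture_filter_anisotropic") ||
          gl.getExtension("MOZ_EXT_texture_filter_anisotropic");

if (ext) {
    var maxAniso = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
    // Apply the maximum supported anisotropy to the currently bound texture.
    gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}
```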