A little bit about me

lken

New Member
#1
:yup: Hey everyone,

My name is Laura. I just graduated with a degree in Wildlife and Fisheries Management and am looking to get into sustainable aquaculture or fisheries. I'm working for a professor right now and trying to do statistics without the perfect datasets that we get in class. So far I'm finding it really challenging and haven't gotten too far (see my post on unbalanced designs and non-normal count data in SPSS), but I'm willing to learn and am really excited that I found this forum. After my project is complete I plan on learning R Commander, because it seems like the most flexible program from what I gather from people's conversations here.

Any suggestions on what books to get or how to get started would be great!:yup:
 

jpkelley

TS Contributor
#2
Hi lken. Welcome to TalkStats! Sorry about the delay in reading your introductory message.

Sounds like some interesting work you're doing. Maybe tell us a bit more detail about what you're working on?

Good to hear you're moving towards R (otherwise known as Statistical Catnip, for its addictive qualities). R Commander is a decent way to get introduced to R after using a GUI program like SPSS. Eventually, you'll likely want to move to something like RStudio to organize your workflow. I would also recommend R package "ProjectTemplate" for setting up reproducible datasets and analyses.

You will find various opinions about good reference texts to get you started with analysis using R. I would look at intro books by Michael Crawley and, for analysis of ecological data, the books by Alain Zuur (Mixed-effects models...). The UseR! series of books, which should be available for download at any university library with an agreement with Springer publishers, is also very useful.
 

Dason

Ambassador to the humans
#6
!! Congrats. I usually rely on you other meatbags to notice these types of things. I should program in a check every now and then to scan post counts and see who is near milestones.
 

trinker

ggplot2orBust
#7
TE is the 1000-post detector guy. I'm the close-to-300-posts detector (in fact I'm about to nominate someone who's getting close). But TE's fooling around in the jungles, calling it work. :) So the big 1000 goes unnoticed :(

Congrats Bryangoodrich and Bugman and welcome aboard lken :welcome:

PS Dason, when you call us "you other meatbags", if you're attempting to put yourself in that category you'd have to have more than an iron core, so the phrase should have been "I rely on the meatbags, as I am a tin can"


OH PPS Dason congrats on the big 5555 posts :)
 

bryangoodrich

Probably A Mammal
#8
Shouldn't be too hard to create a script using curl to access a page or profile, wherever their post count is shown, and web scrape the relevant information. Have it run daily or something and alert you whenever a condition is met. I'm just saying. It's not too hard :p
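For the run-it-daily-and-alert part, a minimal sketch of the milestone condition might look like the function below. This is just my illustration: the `near_milestone` name and the assumption that milestones fall on multiples of 1000 are mine, not anything the forum provides.

```shell
#!/bin/sh
# Hypothetical milestone check: prints "approaching N" when COUNT is within
# THRESHOLD posts of the next multiple of 1000 (the 1000 spacing is an
# assumption), and "not close" otherwise.
near_milestone() {
    count=$1
    threshold=$2
    next=$(( (count / 1000 + 1) * 1000 ))
    if [ $(( next - count )) -le "$threshold" ]; then
        echo "approaching $next"
    else
        echo "not close"
    fi
}
```

Pair it with a daily cron entry (e.g. `0 8 * * * /path/to/check_posts.sh`) and the scraping step to pull the current count.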
 

Dason

Ambassador to the humans
#9
bryangoodrich said:
Shouldn't be too hard to create a script using curl to access a page or profile, wherever their post count is shown, and web scrape the relevant information. Have it run daily or something and alert you whenever a condition is met. I'm just saying. It's not too hard :p
Sounds like a job for our local curl expert. Oh wait that's you :p
 

Dason

Ambassador to the humans
#11
Oh, I know enough that I could probably get by. (Although I'd probably use wget - it's the tool I typically use to grab webpages - though if you need anything sufficiently complex then curl might be a necessity.)

I was just pointing out that you're the resident curl expert and I don't particularly care - you meatbags can deal with these celebrations while I [Tex]\sout{continue\ on\ my\ path\ to\ becoming\ skynet}[/Tex] study some more.
 

bryangoodrich

Probably A Mammal
#12
Code:
sudo apt-get install curl
curl http://www.talkstats.com/showthread.php/23884-A-little-bit-about-me?p=78196 | grep "<dt>Posts</dt>" | grep -o "<dd>.*</dd>" | cut -d"<" -f2 | cut -d">" -f2
Probably not the best approach, but it'll grab post counts from this page! Just need to associate them with identities (stored several lines above the posts in the HTML). I'm sure web scraping each person's profile would be a better option (total posts are on the line below "total posts").
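Since that pipeline depends on the live page, the same extraction can be exercised against a canned HTML snippet. The one-line `<dl>` layout here is an assumption about what the forum software emits; the live markup may differ.

```shell
#!/bin/sh
# A canned stand-in for the profile HTML (assumed layout: the post count
# sits in a <dd> right after <dt>Posts</dt>, all on one line).
html='<dl><dt>Posts</dt><dd>5555</dd></dl>'

# Same extraction as the curl pipeline above, minus the network fetch.
posts=$(printf '%s\n' "$html" \
  | grep "<dt>Posts</dt>" \
  | grep -o "<dd>.*</dd>" \
  | cut -d"<" -f2 \
  | cut -d">" -f2)
echo "$posts"   # prints 5555
```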
 
lken

New Member
#14
Hey thanks, this looks like it's going to be really useful. You've convinced me! I'm going to download R onto my computer right now and start exploring.
 

bugman

Super Moderator
#15
Thanks mate, I think it was during our most recent conversation about linux. I hope the '000th was insightful...

meh, maybe not.