Worried about PLE
Thinking this was very low, I googled and found the debate around whether 300 is still a meaningful threshold.
I also found some rather heavy reading material, and wondered what other specific stats I should be looking at, beyond a "get more RAM" response.
The servers have 16GB of RAM each, with max server memory capped at 13.5GB for SQL Server. None of the applications appear to be suffering any performance issues.
I've included a screenshot of PA. Let me know which other stats could help or hinder PLE. Could our current disk read latency issues (with our SAN) have any connection?

Performance Advisor continues to show PLE in seconds. The screenshot you provided (sample mode) shows a server nearly at rest: PLE is 12,100, there are almost no waits, few page faults, and no disk latency. A screenshot of the dashboard in history mode, showing a 10–30 minute time frame in which low PLE occurred, would be more instructive.
Are the low PLE values sustained or short-term? What does PLE look like when the PA dashboard is in history mode covering a period of several hours? Is there a recurring pattern?
When you see a drop in PLE, check for concurrent high CPU, high waits, and/or high disk latency. If none of those are occurring, then a query or some other operation (CHECKDB, index maintenance, etc.) has flushed memory pages and brought new pages in, and the server is responding appropriately. If PLE returns to higher levels, there probably isn't a problem.
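That triage logic can be sketched roughly as follows; the sample structure, field names, and thresholds here are illustrative assumptions of mine, not anything SQL Sentry actually exposes:

```python
# Sketch of the PLE-drop triage described above. The sample dicts and
# the threshold values are hypothetical, chosen only for illustration.

def ple_drop_is_concerning(samples, recovery_ple=1000,
                           high_cpu_pct=80, high_latency_ms=20):
    """samples: dicts with 'ple', 'cpu_pct', 'disk_latency_ms' keys,
    ordered oldest to newest, covering the window around a PLE drop."""
    latest = samples[-1]
    # If PLE has already climbed back to a healthy level, there is
    # probably no problem: buffer churn from a big query, CHECKDB,
    # index maintenance, etc. is expected behavior.
    if latest["ple"] >= recovery_ple:
        return False
    # Low PLE *with* concurrent CPU or disk pressure anywhere in the
    # window is the combination worth digging into.
    return any(s["cpu_pct"] >= high_cpu_pct or
               s["disk_latency_ms"] >= high_latency_ms
               for s in samples)
```

If this returns False, the drop is most likely just a large operation cycling the buffer pool, per the advice above.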
You can find out what caused the drop in PLE by looking at the Calendar and/or Top SQL tabs in SQL Sentry.
https://www.sqlskills.com/blogs/paul/page-life-expectancy-isnt-what-you-think/
You need more memory.
One of the servers, with a PLE of 5.5 (!!), is a SharePoint farm DB which is heavily used and running perfectly, which is what made me wonder whether PA was using minutes rather than seconds in its new version 🙂
Sounds like you already figured this out, but in history mode the Y-axis shows PLE in thousands of seconds, hence the decimal and the "K" suffix.
Note that SQL Sentry v8 ships with a custom condition that uses Jonathan Kehayias' adaptable PLE formula, a much better approach than the old flat 300 seconds. I blogged about it here:
http://blogs.sqlsentry.com/GregGonzalez/sql-sentry-v8-intelligent-alerting-redefined/
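For reference, Kehayias' adaptable baseline works out to roughly 300 seconds of PLE per 4 GB of buffer pool memory, rather than a flat 300 regardless of server size. A quick sketch (the function name is my own):

```python
def ple_baseline_seconds(buffer_pool_gb: float) -> float:
    """Kehayias' rule of thumb: ~300 s of PLE for every 4 GB of
    buffer pool memory, instead of a flat 300 s for any server."""
    return (buffer_pool_gb / 4.0) * 300.0

# With max server memory at 13.5 GB, as in the original question,
# the baseline is 1012.5 s -- so sustained PLE well below ~1000 s
# would be worth investigating, while 300 would be far too lenient.
```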
Apparently it is a comma after the (currently) 30, but it is very small and I mistook it for a decimal point. So what I read as 30.744 (as did Justin) actually reads 30,744.
Really, the best way to look at PLE is in history mode, so you can spot patterns. As has been mentioned, certain types of operations will always cause buffer churn and low PLE, but as long as it quickly returns to high levels it generally shouldn't be a concern.
To expand on what Justin said, if you highlight a range of a minute or two around a drop, then Jump To the Calendar or Top SQL, you can usually find the culprit quickly. If it turns out to be a poorly designed user query and/or its supporting indexes, there may indeed be an optimization opportunity.