Would you consider SAN for SCADA historian?

Hi all,
I am rather green on the SAN topic, which is huge (I did take a short introductory SAN course).

Let's assume you need to configure a SAN (storage area network) for the history records of a SCADA system.
1. What things are critical to consider when designing a SAN? Performance, the choice of iSCSI, FC, or FCoE?
2. Are SANs often used in automation systems, or are time-series databases powerful enough for the data logging?
3. If you have ever used a SAN in an automation project: is it OK to assume that it will be designed and configured by a SAN/network specialist, or should an automation engineer have this knowledge?

All thoughts and opinions on this topic are very welcome.
Well, we have some customers who (in the past) used HP EVA (model EVA4100) or 3PAR, and others who used disk arrays (MSA2000, P2000G3, P4300G2). Some iSCSI, some FC.
Later came VSAs (Virtual Storage Arrays), which utilized the internal disk drives of HP (or any other) host servers to create a 2-node (or larger) virtual array.
Mostly our people configured the whole thing (as back then we supplied everything: servers and storage, the OS and databases [Oracle], up to the SCADA/MES systems).

Nowadays, we prefer what I believe is called a "hyperconverged architecture": we get a good machine with a lot of cores, several TB of disk space (possibly a combination of SSD and HDD), and a few hundred GB of RAM (depending on the size of the application). So, we don't configure SANs anymore (and even before, HP performed the initial configuration of the EVA/3PAR; we only extended it, and perhaps reconfigured some things on the FC switches, etc.).

For virtualization (if used): VMware, possibly Hyper-V.
For the OS: Windows or Linux (RedHat, Oracle Linux, previously CentOS).
For the historian's underlying DB we prefer PostgreSQL (we have also migrated multiple older customers from Oracle during upgrades).
Our system (Ipesoft D2000) supports redundant historians as well as redundant application servers. So: 2 physical machines, each hosting one of the (virtual) application servers (there are often other virtual servers too).
Each of the historians has its own independent PostgreSQL database, running on a local disk.
If we need a "shared" redundant generic application database, we use PostgreSQL running on Pacemaker/Corosync clusterware on Linux. Underneath, there are 2 DRBD instances (one on each physical server) which use 2 local partitions (not whole physical drives) to create a virtual drive, so the data is mirrored. PostgreSQL runs on one of the physical servers at a time. Some more info in a blog.
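To illustrate the mirroring layer described above, a 2-node DRBD resource definition might look roughly like this (node names, IP addresses, and partition paths are hypothetical; classic DRBD 8.4-style syntax):

```
resource pgdata {
  protocol C;                  # synchronous replication: a write completes
                               # only after both nodes have it on disk
  on node1 {
    device    /dev/drbd0;      # the virtual drive PostgreSQL sits on
    disk      /dev/sdb1;       # local backing partition (hypothetical)
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Pacemaker then ensures that the DRBD resource is promoted to primary, the filesystem is mounted, and PostgreSQL is started on exactly one node at a time.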

Currently, the available HW has enough speed/IOPS to handle our historian's load. On the other hand, the historian is heavily optimized; it even has its own cache that keeps the last N hours/days of data in memory, and thus avoids querying PostgreSQL when a user opens a graph or requests data from a script.
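The caching idea can be sketched as follows. This is a minimal, illustrative Python sketch of "keep the last N hours in memory, fall back to the DB otherwise"; all names are hypothetical and this is not the actual D2000 implementation:

```python
from collections import deque

class HistorianCache:
    """Keep a rolling window of recent samples per tag in memory,
    so queries for recent data never touch PostgreSQL.
    (Illustrative sketch only.)"""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.samples = {}  # tag -> deque of (timestamp, value)

    def append(self, tag, ts, value):
        dq = self.samples.setdefault(tag, deque())
        dq.append((ts, value))
        # evict samples that fell out of the rolling window
        while dq and dq[0][0] < ts - self.window:
            dq.popleft()

    def read(self, tag, start_ts, end_ts, db_fallback):
        dq = self.samples.get(tag, deque())
        if dq and dq[0][0] <= start_ts:
            # the whole requested range is in memory - no DB round trip
            return [(t, v) for (t, v) in dq if start_ts <= t <= end_ts]
        # the range reaches past the cached window - query PostgreSQL
        return db_fallback(tag, start_ts, end_ts)
```

A query whose start lies inside the cached window is served entirely from memory; anything older drops through to the database, which is exactly what keeps graph-opening cheap.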