Here are some things to be aware of about the upcoming new platform for member vidchats:
First, it will require some testing to implement. It may or may not be ready for the next show, and it's unlikely we'll release it over a holiday week. It's in the works, but it won't be immediate. We estimate it'll be live within 60 days. But keep in mind, that's subject to testing. Nothing smart goes out without tests, so we're not promising anything at this point.
Second, it will require some learning. It won't be the old system, and it won't work exactly like the old system did. We're not spending 500K on a custom platform (we'd have to make memberships far more expensive for that), so we'll be using a platform that meets several of our key needs, like capturing recordings, permitting unlimited viewers, and providing a means for Q&A.
Third, changing platforms may or may not resolve latency issues in some broadcasts. Here's exactly why:
There are 3 parts to any such system:
- the internet bandwidth for the broadcast (Dr. Farrell’s internet connection)
- the broadcasting machine (Dr. Farrell’s PC)
- the receiving network (the platform where the video broadcast is being hosted)
Part of the issue may not be the current platform at all; it has 24 million users and about 14 million live streams per year, and there are a ton of successful broadcasts happening on it as you read this. It may not be particularly tolerant of hiccups in bandwidth or of conservative system resources, but dumping everything at the feet of the current platform would be like blaming Ford because your truck doesn't run well on 4 of its 6 cylinders.
In an overwhelming number of cases, when there is latency in a video broadcast, the problem lies in the broadcaster's internet connection: latency, packet loss, or insufficient upload speed, or some combination of the three. Those three factors are absolutely huge when uploading video live; raw download speed isn't the big issue. Sending video over the internet requires maximum stability in the connection, and any hiccups on the sender's end become hiccups for all receivers. If *all* receivers are experiencing and reporting the same issues, this is highly likely to be at least part of the cause.
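To make those three factors concrete, here's a rough sketch (not part of our tooling, and the numbers are made up for illustration) of how a ping sample could be summarized into average latency, jitter, and packet loss:

```python
# Rough sketch: summarizing the three upload-side factors from a ping sample.
# The RTT samples and scenario below are hypothetical, for illustration only.

def summarize_connection(rtts_ms, packets_sent):
    """Given round-trip times (ms) for the pings that came back, and the
    number of pings originally sent, report latency, jitter, and loss."""
    received = len(rtts_ms)
    loss_pct = 100.0 * (packets_sent - received) / packets_sent
    avg_latency = sum(rtts_ms) / received
    # Jitter here: mean absolute difference between consecutive RTTs.
    jitter = sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])) / (received - 1)
    return {"avg_latency_ms": avg_latency, "jitter_ms": jitter, "loss_pct": loss_pct}

# Hypothetical example: 20 pings sent, 18 came back.
stats = summarize_connection([42, 44, 41, 95, 43, 42, 40, 44, 120, 43,
                              41, 42, 45, 43, 44, 42, 41, 43], 20)
print(stats)
```

Note how a couple of RTT spikes (95 ms, 120 ms) inflate the jitter figure even when the average latency looks fine; for live video, those spikes and the 10% loss matter far more than raw download speed.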
Another possible cause is a PC/system issue (computer resource management) on the broadcaster's end: current free RAM, % of CPU use, disk speed (for video processing), available disk space during the broadcast, graphics processor RAM and speed (which are separate from other RAM), and whether the graphics processor is integrated (cheap, and uses the CPU as its brain) or standalone (has its own brain). Those factors, or any combination of them, determine how likely it is that the video recording will be smooth, and any one of them could contribute to a poor broadcast.
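As a hypothetical illustration of how those factors might be checked before going live, here's a small pre-broadcast checklist sketch; the threshold numbers are illustrative assumptions, not official requirements for any platform:

```python
# Hypothetical pre-broadcast resource checklist. The threshold numbers are
# illustrative assumptions, not requirements from any actual platform.

MINIMUMS = {
    "free_ram_gb": 4,        # free RAM left for the encoder
    "free_disk_gb": 20,      # space for local recording / pre-processing
    "gpu_dedicated": True,   # standalone GPU with its own memory
}
MAXIMUMS = {
    "cpu_use_pct": 70,       # leave headroom for video encoding
}

def check_system(stats):
    """Return human-readable warnings for any stat outside the target range."""
    warnings = []
    for key, minimum in MINIMUMS.items():
        if stats.get(key, 0) < minimum:
            warnings.append(f"{key} below recommended minimum ({minimum})")
    for key, maximum in MAXIMUMS.items():
        if stats.get(key, 0) > maximum:
            warnings.append(f"{key} above recommended maximum ({maximum})")
    return warnings

# Example snapshot (made-up numbers): low RAM, busy CPU, integrated GPU.
warnings = check_system({"free_ram_gb": 2, "free_disk_gb": 50,
                         "gpu_dedicated": False, "cpu_use_pct": 85})
print(warnings)
```

Any one of those warnings, on its own, could be enough to make a multi-hour live broadcast stutter.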
Of course, Giza Support doesn’t have a local technician on site with the host for the broadcasts (our scope is supporting technology on the web site, not technology resident in his office), so we can’t verify that those were contributing factors on Black Friday. We have seen internet latency issues from the same location in the past, however.
The new platform will, like all platforms, be imperfect, and we cannot promise, nor necessarily expect, that it will resolve all issues. Worst case, it may do nothing more than remove ads and reveal that the issues are not actually platform-specific at all. Or the cause may be *both* the platform and the broadcaster's system or connection. In that case, if we see similar issues on the new platform, it falls to the show's host to resolve technical issues related to connectivity or system resources with local professionals, whom Giza Support can refer but who are out of our reach.
Hoped-for middle ground: the new platform may be more tolerant of a system with internet latency, packet loss, or system resource issues. If it's actually designed to accommodate an 'everyman' broadcast of great length, it will likely do so by imposing a) a slight drop in video resolution and sound definition and b) an air delay to allow for processing, just like on a live news broadcast. That middle ground would be good. It doesn't mean we wouldn't get *better* broadcasts if the uploading source had fewer latency issues, but the increased tolerance for them would be welcome. Signs are good we might hit this middle ground.
So the bottom line: we ask you to bear with us while we test the new system, bear with learning how to use it versus the old one when we roll it out, and keep expectations reasonable (it's not CNN) while keeping your fingers crossed that it turns out to be more practical.
In the meantime, some tips:
The recorded version from Black Friday will have the same issues as the live broadcast, because it's a recording of that broadcast. This actually demonstrates that the issues lie in the connection between the sending computer and the platform provider's servers, though it doesn't say on which end. We think the Black Friday session is a throwaway. If you found it useful - great. If not, we're calling it a pass.
If you're a technician capable of providing local / remote support in the following 2 areas:
- diagnosing local internet latency, line quality, packet loss, and upload speed issues, making recommendations to resolve, and followup testing to monitor and verify ongoing results
- diagnosing system resource clogs specifically during live video broadcast streaming from a PC (current free RAM, % of CPU use, disk speed [for video processing], available disk space [for video pre-processing], graphics processor RAM and speed, and GPU/CPU interference)
Feel free to reach out to Dr. Farrell directly if you'd like to volunteer to be a local/remote support person. Facility in correctly assessing all of the above areas of concern would be essential, as would availability before, and perhaps during, broadcasts. The fact is, we can switch platforms, but without someone remotely supporting the broadcasting unit, we've only changed out a part in that Ford and hoped it did the trick.