August 20th, 2014
Instant messaging is not new. Neither is the online chat room. CompuServe's "CB Simulator," launched in 1980, was probably the first online service we would now recognize as instant messaging, and Internet Relay Chat (IRC), the ancestor of today's chat rooms, dates from 1988. Now add the growing use of mobile devices, mix in some annoyance with email spam, and suddenly the venerable old chat is a hot business communication application. When you remember that many of the people now entering the workforce grew up messaging friends on their phones, that's not really a surprising development.
- Why these startups think chat apps are the next big thing in workplace collaboration (GigaOM | Jonathan Vanian) “The idea is that workers are now spending their days immersed in the world of the chat box; a sort of modern-day equivalent of the office water cooler, where ideas and jokes can be shared, but also — thanks to software and the ability to link up to the storage-service providers — the place where documents can be stored, indexed and able to be easily accessed. According to a recent Gigaom Research report by Stowe Boyd, this idea of contextual conversation ‘is likely to become the dominant social motif of the next generation of work-technology applications.’”
- Why enterprise mobile messaging is the latest startup craze (CITEworld | Matt Weinberger) “In libraries, coffee shops, and anywhere else where workers are not in front of a computer all day, text and IM is the smartphone-friendly mode of communication of choice. That means that coworkers need to befriend each other on their personal social networks, or else swap phone numbers, neither of which makes for a healthy or comfortable life-work balance. It’s a lot easier when you empower those same front-office workers with a tool where they don’t need to know the phone number of the person they need to talk to.”
- Beyond Google Hangouts: What these chat apps are doing differently (HSI blog | Ryann Rasmussen) “Many of the chat startups mentioned here [Slack, HipChat, Flowdock, and Convo] are still in their infancy, but they’re already making a splash. Inspired by everything from small businesses to giants, these chat startups are looking at the workplace from every angle. Larger companies like Rally, Atlassian and Microsoft are purchasing these startups, a testament to their potential in offices around the globe.”
- Slack is killing email (The Verge | Ellis Hamburger) “There’s something intrinsic to communicating with a larger number of people that’s going to be difficult to manage, especially given the amount of information we get. Email has gotten worse over the last 10 years or so. Ten years ago, 50 to 60 percent of email was from another person, and now it’s 8 to 10 percent. The other 90 percent is from a machine — email marketing, receipts, new Twitter followers, Facebook comments, check-ins, monthly statements, blah blah blah.”
August 13th, 2014
Google made an interesting announcement last week. To promote the use of secure, encrypted HTTPS connections to websites, it will make HTTPS a "ranking signal" for its search results. In other words, a website that uses HTTPS will show up higher in a Google search than one that does not. It may be only a little higher for now, since this will initially be a minor ranking signal, but Google admits it may strengthen the signal later. Almost all the reaction was positive, apart from tweets by people who work in search engine optimization. As librarians, though, shouldn't we be a bit concerned that the quality of information might be judged by its format rather than its content, just so Google can make a point about web security?
- Google Search starts penalizing websites that don’t use encryption (PC World | Jeremy Kirk) “The move is designed to spur developers to implement TLS (Transport Layer Security), which uses a digital certificate to encrypt traffic, signified by a padlock in most browsers and ‘https’ at the beginning of a URL. As Google scans Web pages, it takes into account certain attributes, such as whether a Web page has unique content, to determine where it will appear in search rankings. It has added the use of https into those signals, although it will be a ‘lightweight’ one and applies to about 1 percent of search queries now…”
- In major shift, Google boosts search rankings of HTTPS-protected sites (Ars Technica | Dan Goodin) “TLS also provides a means for cryptographically validating that a server claiming to belong to Google, Bank of America, or any other website is authentic, rather than an impostor set up to trick users. Over the past few years, American Civil Liberties Union Principal Technologist Chris Soghoian has used a carrot-and-stick approach to persuade more sites to HTTPS-protect their pages. He sometimes publicly chastises companies that transmit sensitive information over unencrypted connections.”
- Google boosts secure sites in search results (Electronic Frontier Foundation | Bill Budington) “This week’s announcement further underlines a commitment to encrypting Internet traffic and keeping user data safe, and encouraging others to do so. We urge Google to go further and carry out its plan to strengthen the preference of HTTPS sites, as well as favoring sites that have configured HTTPS well…”
- Google to reward sites with HTTPS security in search rankings (Forbes | Larry Magid) “This is one more example of the power of Google’s ranking system. While Google doesn’t control content on the web, its search is by far the most effective way for content to be found so anything a webmaster can do to increase a Google ranking equates to more visitors and, in many cases, more revenue.”
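One way to picture a "lightweight" ranking signal like this is as a small weighted bonus added to a page's relevance score. Here is a toy sketch in Python; the weights and scoring model are entirely invented for illustration and are not Google's actual algorithm:

```python
from urllib.parse import urlparse

# Invented weight: a "lightweight" signal contributes only a small bonus,
# so it matters mainly as a tiebreaker between similar pages.
HTTPS_BONUS = 0.05

def score(relevance: float, url: str) -> float:
    """Combine a base relevance score with a small HTTPS bonus."""
    bonus = HTTPS_BONUS if urlparse(url).scheme == "https" else 0.0
    return relevance + bonus

def rank(pages):
    """Order (relevance, url) pairs by combined score, best first."""
    return sorted(pages, key=lambda p: score(*p), reverse=True)

pages = [
    (0.90, "http://example.org/article"),
    (0.88, "https://example.com/article"),
]
# The HTTPS page edges ahead only when relevance is nearly equal.
print(rank(pages)[0][1])
```

The point of the sketch is that a minor signal cannot rescue a weak page, but it can reorder two pages of roughly equal quality, which is exactly the behavior Google describes.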
August 6th, 2014
As if you needed something else to worry about, there seems to be a strong possibility that USB devices can be used in new and nasty ways to damage computers, such as the public computers in libraries. Security researchers Karsten Nohl and Jakob Lell are giving a briefing tomorrow about “BadUSB—on accessories that turn evil” at the Black Hat convention in Las Vegas. Their presentation has already received a lot of attention because they have found a way to reprogram the controller chip in a USB thumb drive so it acts like a different USB device, perhaps a keyboard or network card. And there doesn’t seem to be any easy way (yet) to protect your computers.
- Why the security of USB is fundamentally broken (Wired | Andy Greenberg) “The malware they created, called BadUSB, can be installed on a USB device to completely take over a PC, invisibly alter files installed from the memory stick, or even redirect the user’s internet traffic. Because BadUSB resides not in the flash memory storage of USB devices, but in the firmware that controls their basic functions, the attack code can remain hidden long after the contents of the device’s memory would appear to the average user to be deleted.”
- Researchers warn about ‘BadUSB’ exploit (PC Mag | David Murphy) “A device could, for example, emulate a USB-connected keyboard and automatically send over all sorts of keystrokes that, when combined, could lead to issues—installing malware, wiping key files off a drive, copying files over to the USB device, etc. And that’s just the first example. SRLabs notes that a USB-connected device could also pretend that it’s a network card and redirect the traffic to and from a system through a rogue DNS server. Or, better yet, it could infect that system with a boot-sector virus that could be a bit tougher to detect and remove than your average infection.”
- BadUSB: Big, bad USB security problems ahead (ZDNet | Steven J. Vaughan-Nichols) “The hackers claim that ‘Simply reinstalling the operating system – the standard response to otherwise ineradicable malware – does not address BadUSB infections at their root. The USB thumb drive, from which the operating system is reinstalled, may already be infected, as may the hardwired webcam or other USB components inside the computer. A BadUSB device may even have replaced the computer’s BIOS – again by emulating a keyboard and unlocking a hidden file on the USB thumb drive.’ In short, ‘Once infected, computers and their USB peripherals can never be trusted again.’”
- Don’t panic over the latest USB flaw (Tom’s Guide | Marshall Honorof) “BadUSB is a proof-of-concept attack, designed by security researchers. They’re not going to release it into the wild[…] Furthermore, demonstrating something like BadUSB at a conference like Black Hat is basically an open invitation for the security community to fix this vulnerability before it becomes widespread.”
July 30th, 2014
You have to give Google credit for thinking big. Some recent developments indicate that Google believes it's time for Google Translate, which has been slowly getting better, to truly make an impact on global language barriers. Translate currently handles 80 languages, and while it's not yet accurate enough for the medical community, it seems to be good enough for government websites trying to engage the 10% of Americans with limited English proficiency. So what is the best way to accelerate Translate's improvement? Google's answer: empower Translate users to do the job.
- Translate Community: Help us improve Google Translate! (Google Translate blog | Sveta Kelman) “We’ve just launched a new Translate Community where language enthusiasts can help us improve translation quality for the 80 languages we support, as well as help us in launching new languages. In the new community, you’ll find options to help with a variety of things, including generating new translations and rating existing ones. Over time, you’ll find more ways to contribute, as well as get more visibility into the impact of your contributions and the activity across the community.”
- Google wants to improve its translations through crowdsourcing (TechCrunch | Frederic Lardinois) “For those who don’t want to join the community, Google also recently launched a new feature directly in Google Translate that allows you to contribute your own translation when you see a mistake. Google Translate always allowed you to rate translations as helpful, not helpful and offensive, but now you can actually provide the service with the actual correction.”
- Google Translate gets Translate Community to improve translation quality (TechOne3 | Jagmeet Singh) “Apart from making an authorised community, Google has recently added the ability to suggest an entirely new translation directly in Google Translate. To submit translations, users have to click the ‘Improve this translation’ pencil icon and click ‘Contribute’ to submit.”
- Five things to consider before using Google Translate for Bengali translation (Smartling blog | Rehana Parvin) “Every time Google Translate isn’t able to translate a word into Bengali, you can add the word in manually, save the change, and use it the next time. Taking the same example from above, the word, ‘বইয়ের’ is incorrect. The correct word would be ‘বইটি’. If you click on the word ‘বইয়ের,’ you’ll be able to click and choose the correct replacement.”
July 23rd, 2014
About four years have passed since Google announced that the company had decided to release a new image format called WebP. Back then, Google estimated that about 65% of Internet traffic consisted of images and photos, and WebP was designed to shrink those image files and thus speed up loading for web pages that used the format. Lean image formats are back in the news lately because the Mozilla browser group has concluded that WebP is not the best solution to the problem of image bloat on the Internet, and has released its own solution instead.
- The story of WebP: How Google wants to speed up the web, one image at a time (GigaOM | Janko Roettgers) “Firefox, Internet Explorer and Safari don’t natively support WebP, and it’s unlikely that the makers of these browsers are going to change their mind anytime soon. That’s because like so often, everyone has their own vision of how the future is going to look like. Microsoft is pushing for its own format, dubbed JPEG XR, to replace traditional JPEGs, and Apple has long steered clear of Google’s media formats. The most logical ally for Google would be Mozilla, which has traditionally been a proponent of open media formats.”
- Mozilla’s new Mozjpeg 2.0 image encoder improves JPEG compression (Techspot | Himanshu Arora) “The JPEG format, which has been in use for more than 20 years, is one of the most widely used image formats on the Internet. It’s a lossy format, which means that you can remove some data to reduce the file size without significantly affecting the original image’s integrity. Google has been promoting the use of its WebP image format, a derivative of the video format VP8, but Mozilla has long resisted the call to adopt it.”
- We don’t need new image formats: Mozilla works to build a better JPEG (Ars Technica | Peter Bright) “Mozilla has also been looking at the issue, but the open source browser organization has come up with a different conclusion: we don’t need a new image format, we just need to make better JPEGs. To that end, the group has released its own JPEG compression library, mozjpeg 2.0, which reduces file sizes by around five percent compared to the widely used libjpeg-turbo. Facebook has announced that it will be testing mozjpeg 2.0 to reduce its bandwidth costs, similar to its WebP trial.”
- Mozilla releases mozjpeg 2.0 as Facebook tests and backs the JPEG encoder with $60,000 donation (The Next Web | Emil Protalinski) “Facebook could use the encoder on photos that users have already uploaded to the site, or it could apply it dynamically on images that are regularly accessed, such as profile pictures or link thumbnails. Whatever the case may be, the potential to reduce loading time is very high, given that Facebook is such an image-heavy service.”
July 16th, 2014
This weekend at the “Hackers on Planet Earth” conference, the Electronic Frontier Foundation (EFF) plans to demonstrate new open source firmware for wireless routers. While open source wireless firmware is nothing new, in this case, the firmware is designed specifically to support the Open Wireless Movement. This movement is promoting the widespread sharing of unencrypted wireless networks with no password protection, so anyone can easily access and use them. Libraries are big on sharing, of course, and also big providers of public wireless, but will they embrace Open Wireless?
- What is the Open Wireless Movement? (openwireless.org) “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access. We’re also teaching the world about the many benefits of open wireless in order to help society move away from closed networks and to a world in which openness is the default. Our efforts follow the opinion of nationally recognized computer security expert Bruce Schneier, who considers maintaining an open wireless node a matter of ‘basic politeness’.”
- New open-source router firmware opens your Wi-Fi network to strangers (Ars Technica | Joe Silver) “[OpenWireless.org’s] mission statement reads. ‘And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.’ One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-sourced router firmware called Open Wireless Router.”
- This tool boosts your privacy by opening your Wi-Fi to strangers (Wired | Andy Greenberg) “One goal of OpenWireless.org, says EFF staff attorney Nate Cardozo, is dispelling the legal notion that anything that happens on a network must have been done by the network’s owner. ‘Your IP address is not your identity, and your identity is not your IP address,’ Cardozo says. ‘Open wireless makes mass surveillance and correlation of person with IP more difficult, and that’s good for everyone.’”
- EFF wants you to open your Wi-Fi to IMPROVE privacy (The Register | Darren Pauli) “The EFF sees the proliferation of segmented open wireless networks as a key tactic that will foil intelligence agencies’ ability to track individuals. By opening home and business wireless to all, it became more difficult to tie people to their online activity.[…] Provided the software is sufficiently secure, the obvious outstanding threat would be to the open wireless users who could find themselves blamed for online crimes committed by anonymous users of their network.”
July 9th, 2014
Our apologies if you have already heard about this, but this news is important enough to bear repeating. This Friday (July 11), the Federal Communications Commission will meet and probably come to a decision about making some sweeping changes to the E-rate program. FCC Chairman Tom Wheeler would like to shift E-rate discounts away from supporting outdated technologies – such as pagers and (eventually) plain old telephone service – to more current technology needs, particularly internal Wi-Fi. Does he have support from the Commissioners to get approval for his proposals? We’ll find out on Friday.
- Modernizing E-rate: Providing 21st century Wi-Fi networks for schools and libraries across America (Federal Communications Commission) “Modernizing our rules to facilitate investment in Wi-Fi would result in a 75 percent increase in Wi-Fi funding for rural areas, which have been disproportionately shut out by the current system. Under existing rules rural schools on average receive 25 percent less Wi-Fi funding for every student, and 50 percent less funding for every school, compared to their non-rural peers, because the current rules often put them at the back of the line.”
- Washington’s Wi-Fi Friday: FCC, Senate push for more Wi-Fi in schools, more unlicensed airwaves (GigaOM | Kevin Fitchard) “Wheeler is calling for new rules to the government’s E-Rate program, which was established 18 years ago to bring internet connectivity to schools and libraries. The program largely accomplished its mission, delivering broadband access to 94 percent of U.S. classrooms and 98 percent of public libraries, according to the FCC. But when the rules were originally written, they didn’t anticipate the wireless connections most devices would need to make that final hop to the internet.”
- ALA encouraged by FCC Chairman’s commitment to a multi-stage E-rate reform (District Dispatch | Marijke Visser) “Mobile internet use in libraries is exploding, and this first step by the Chairman to address this need is important for the vast number of schools and libraries that have not received E-rate support for internal (e.g., Wi-Fi) connections for many years. But this is not enough to meet our national needs. The lack of access to affordable, high-capacity broadband to the building remains a major challenge for so many libraries and schools. Such access must be fully funded for eligible applicants, regardless of any new funding models for Wi-Fi services.”
- E-rate reform: A sustainable path forward for school and library connectivity (The Hill | Danielle Kehl and Sarah Morris) “Simply put, ubiquitous Wi-Fi cannot achieve its promise without a robust wired backbone that is scalable to meet future needs. That’s why a number of stakeholders have recommended that the FCC create a dedicated ‘upgrade fund’ to help schools and libraries cover high upfront costs associated with capital investments to bring fiber to the premises.”
As many of you know, OPLIN and the State Library have sponsored E-rate workshops for public libraries, one in the late fall and one in the winter, for a number of years now. This year, because of the anticipated changes, we are planning to do many more workshops in locations around the state and are also looking into improving online delivery of the workshops. Watch for details early this fall.
July 2nd, 2014
How many times have we told you guys that mobile computing is important? (Looks like about one out of every ten 4casts.) So, if you have a website – and every library does – what's the best way to design a mobile version of your site? At the moment, there seem to be three leading contenders for the right answer to this question. The oldest way is to build a separate mobile website tailored for smartphones, usually with the letter "m" prefixed to the URL (e.g., m.oplin.org). These days you hear a lot about responsive web design, in which the website automatically responds to the mobile device by rearranging its elements to best advantage. And sometimes you hear a third possibility mentioned, usually called "adaptive" or "dynamic," in which the website server detects the device and serves up a ready-made page for the correct screen size. Which is best? Responsive design currently seems to have the edge, but the decision is not unanimous.
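To give a flavor of the responsive approach, here is a minimal CSS media query; the class names are hypothetical, but the mechanism is the standard one: the same page rearranges itself based on the screen width instead of the server sending a different site.

```css
/* Wide screens: content and sidebar side by side. */
.content { float: left; width: 68%; }
.sidebar { float: right; width: 30%; }

/* Narrow (mobile) screens: everything stacks in one column. */
@media (max-width: 600px) {
  .content, .sidebar { float: none; width: 100%; }
}
```

Because a single URL and a single set of HTML serves every device, there is no "m." site to maintain and nothing for the server to detect, which is a big part of responsive design's appeal.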
- How responsive design increases the results of your online marketing (Search Engine Watch | Kristi Hines) “Responsive design allows users on any device – desktop, smartphone, or tablet – to have the same experience. Some businesses choose to go with a mobile-only and desktop-only experience, but the trouble with this is the lack of consistency between the two. People who make a purchase on the desktop site will have a completely different purchasing experience on their mobile. And the most troublesome issue is generally how a mobile-only design will not include every page that a visitor will want to see.”
- Why responsive web design is the cornerstone to any mobile strategy (Business 2 Community | George Glover) “Google has advocated for responsive web design and recommends it for mobile configuration with its application (app). The company further states that responsive web design is in fact the industry best practice. So why does Google advocate so insistently for responsive web design? It’s because these types of sites have only one URL and the same HTML, neither of which change regardless of the device being used.”
- Report: Mobile “configuration” errors cause 68 pct. traffic loss (Search Engine Land | Greg Sterling) “By ‘mobile configuration’ the company means responsive design, dynamic serving or dedicated mobile sites (separate URLs). BrightEdge found no significant general ranking difference among the three approaches.… Yet improperly configured mobile sites showed a much worse outcome: ‘an incorrectly implemented site resulted in a drop in smartphone rank by almost two positions (1.82 on average).’ That lower position translated into a 68 percent decline in traffic. In terms of configuration errors, BrightEdge said responsive had none (which makes sense). Dynamic serving saw a 30 percent error rate. But separate URLs (dedicated mobile sites) saw a massive 72 percent configuration error rate.”
- Responsive design vs. mobile websites: And the winner is… (iMedia Connection | Brandt Dainow) “Responsive design rarely provides a best-of-breed solution for the mobile user, while mobile websites make sharing elements difficult. It seems to me the solution to this is to use ‘adaptive design.’ Under an adaptive design system, the server works out what type of device is connecting and serves a combination of shared and unique elements. For example, it might use the same content for all devices, but use different artwork for smaller screens.”
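The "adaptive" or dynamic-serving approach Dainow describes depends on the server guessing the device type, usually from the User-Agent header. A minimal sketch in Python follows; the token list is a deliberate simplification (real device-detection libraries cover far more cases), and the template names are made up:

```python
# Common substrings that mobile browsers include in their User-Agent
# header; a real device-detection library checks many more patterns.
MOBILE_TOKENS = ("Mobi", "Android", "iPhone", "iPad", "Windows Phone")

def is_mobile(user_agent: str) -> bool:
    """Rough server-side check for a mobile device."""
    return any(token in user_agent for token in MOBILE_TOKENS)

def choose_template(user_agent: str) -> str:
    """Pick a device-specific page, as in 'dynamic serving'."""
    return "page_mobile.html" if is_mobile(user_agent) else "page_desktop.html"

desktop_ua = "Mozilla/5.0 (Windows NT 6.1; Win64; x64)"
phone_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) Mobile/11A465"
print(choose_template(desktop_ua))  # page_desktop.html
print(choose_template(phone_ua))    # page_mobile.html
```

Sites that serve different HTML from the same URL this way are generally advised to send a `Vary: User-Agent` response header, so caches and search crawlers know the content differs by device.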
June 25th, 2014
Like just about everyone else, librarians know and love Flickr, Pinterest, and perhaps Instagram. Most of us have heard of Imgur, Snapchat, and Vine, and know that they are wildly popular. But we may be overlooking the significance of this interest in pictures and video on the Internet. The ever-increasing number of images sent over the Internet every day has (of course) been given a name: the visual Web. The problem for libraries is that librarians have a strong tendency to communicate online in text, and the same goes for library websites, while the visual Web suggests that Internet users increasingly prefer websites that communicate heavily through images.
- Twitter has been too slow to catch up with the visual Web (ReadWrite | Lauren Orsini) “People have grown tired of text. And with faster network speeds, their devices can load images just as quickly as they once loaded simpler applications. From the days of cave painting, humans have always been visual creatures. As attention spans shorten and Internet speeds increase, it’s clear which we prefer.”
- The visual Web: How it’s connecting marketers to customers like never before (Business 2 Community | Olivia Cole) “The great thing about the Visual Web, says [VP of Marketing and Insights of comScore, Andrew] Lipsman, is that it enables brands to connect with their audiences on a different level than ever before. Lipsman gives Starbucks as an example. ‘It’s easy to feel like a cog in a machine in the long line at Starbucks every morning. But Starbucks’ Instagram account shines a different light on the experience, making it meaningful and extraordinary.’”
- The growth of the visual Web in 5 charts (Digiday | Matt Van Hoven) “Certainly, one needn’t upload an image or video to share it; all you need to do is find something you like online and share it. But in terms of new content, the numbers are staggering just the same. Snapchat boasts 276,000 snaps per minute. If that was miles per second, Snapchat would be faster than the speed of light (186,000 mps). Facebook comes in just behind with an average of 246,000 images and videos per second, which is roughly the population of Plano, Texas.”
- Journalists need to know how the rise of the mobile, social, visual Web impacts them (Forbes | Lewis DVorkin) “For me, statistics like these confirm the news business is on another collision course. A decade or more ago, journalism collided with the freedoms of digital publishing. Next came the collision with social media. Now, it’s colliding with mobile, social and the visual Web. Journalists should take note.”
June 18th, 2014
Companies and organizations that make predictions about the future often seem to focus on a five-year span, so in 2013 those organizations were developing their forecasts for 2018. It takes a few months for such predictions to get published and disseminated in the media, which is why we have lately seen a number of articles telling us how things will be in 2018. Here are four of those articles, which may be of interest to library tech folks.
- E-books to outsell print by 2018 says new report (BBC News) “Tim Waterstone told the Oxford Literary Festival in March that ‘every indication – certainly from America – shows the [e-book] share is already in decline. The indications are that it will do exactly the same in the UK.’ But Phil Stokes, an entertainment and media partner at PwC [Pricewaterhouse Coopers], said: ‘This growth is being driven by the internet and by consumers’ love of new technology, particularly mobile technology.’”
- After 2018, your PC won’t be the main way we get online (Huffington Post | Timothy Stenovec) “Last year, PCs accounted for 86.4 percent of all Internet traffic. By 2018, the PC share of Internet traffic will drop to just 50.5 percent, according to Cisco. Compare that to smartphones and tablets, which last year accounted for a measly 5 percent and 3.1 percent, respectively, of global Internet traffic. In 2018, smartphones will jump to 21 percent of traffic, while tablets will account for 18 percent, according to Cisco.”
- Cat videos, binge TV watching will account for 84 percent of Internet traffic, Cisco says (Re/code | Amy Schatz) “In the U.S., Internet traffic is expected to surge from 15 exabytes per month last year to 37 exabytes monthly in 2018. … Internet video is expected to account for about 84 percent of all U.S. Internet traffic in four years, up from its current 78 percent, Cisco says. That figure also includes IP VOD, which is basically pay-TV providers’ on-demand video services.”
- Reminder: Nobody has a clue how many wearable devices will sell in 2018 (Time | Harry McCracken) “Although I usually try to steer clear of making tech predictions myself, I am willing to make a bold one about the wearable market in 2018. I hereby declare that whatever it looks like then, the chances are zero that anybody will exclaim, ‘Gee, this was all so utterly predictable back in 2014.’”