The Translational Genomics Research Institute (TGen) is a non-profit 501(c)(3) organization focused on developing clinically relevant medical diagnostics and smarter treatments related to genomic profiling. I’ve had the pleasure of working with the institute for over a year on a project related to breast cancer, and thought I’d share a widget I wrote to keep you up to date on TGen’s latest news items.
The TGen News: Dashboard Widget for Mac OS X v1.1 gives you an always up-to-date set of headlines, pulled straight from the TGen news feed.
At TGen, investigators are pushing the limits of cutting-edge research and technology to discover the genetic cause of disease. Experiments that were impossible and impractical only a few years ago are now conducted every day.
Discovery fuels TGen’s translational research and lies at the heart of our scientific investigations. TGen’s research divisions are designed to foster a wide range of genetic discoveries. These divisions draw heavily upon TGen’s scientific platforms to expedite findings. TGen’s labs are staffed by teams of researchers focused on making genomic discoveries in common diseases and disorders in the areas of oncology, neurogenomics and metabolic disease.
Note: This free product is provided by Preston Lee, and is neither officially endorsed nor supported by TGen.
Textbook publishers in 2011 still aren’t fully appreciating the impact the Internet will have on their industry. A reasonably forward-thinking individual might optimistically assume the industry is self-correcting towards the wants and needs of consumers, but that doesn’t seem to be the case. Let’s explore:
Electronic typesetting.
Physical textbooks obviously can’t be reissued every time a typo is corrected. That’s fine, so we keep batching large textbook changes into en masse “editions” to save typesetting effort.
But electronic textbooks have many not-so-obvious differences.
Screen sizes of reader hardware/software vary dramatically.
Even if screen sizes were the same, it is of tremendous value to allow the user to change font and text size.
Some screens support color, while others don’t. A wonderful color graphic may appear as a blobby mess on a monochrome reader.
The concept of a “page” no longer exists, due to #1 and #2, above. Content cannot simply say “See page 32.” References must be dynamic links, instead.
Content can (and should) be linkable. Obvious examples are tables of contents and figure references. External links need to be supported, as well as more sophisticated “interactive” embedded content items. (A mathematics textbook with an exercise that asks, “Y = 3X + 2. Calculate Y for the following X values: 0, 4, 5.7.” should also be able to grade the assignment itself; see the sketch after this list. Why do I need a completely different book for this?)
Searching, highlighting, note taking, and content sharing are all critical “must have” features for electronic texts.
Open data interchange is probably the biggest techno-political challenge. Retailers aren’t yet jumping on the opportunity to exchange data with the competition. (But they will need to concede, because it’s what consumers and publishers will want.)
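As a concrete illustration of the embedded interactivity I mean in #5, here’s a minimal sketch of how an eBook could grade that “Y = 3X + 2” exercise itself. Python is used purely for illustration; the function name and the grading logic are hypothetical, not part of any real eBook format.

```python
# Minimal sketch: auto-grading the "Y = 3X + 2" exercise from the list above.
# Entirely hypothetical; a real eBook format would embed something equivalent.
def grade(responses):
    """Return the fraction of (x, y) responses where y really is 3x + 2."""
    correct = sum(1 for x, y in responses if abs(y - (3 * x + 2)) < 1e-9)
    return correct / len(responses)

student = [(0, 2.0), (4, 14.0), (5.7, 19.1)]  # the X values the exercise asks for
print(grade(student))  # -> 1.0, so the book itself can tell you how you did
```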
For all these reasons, please stop calling your PDF renderings “eBooks” and then calling it a day. PDF documents cannot “reflow” the way a web page does, and make reading extremely awkward because of reasons #1 and #2, above. In short, direct PDF conversions–such as those used by the University of Phoenix–don’t have any of the typesetting considerations or functional niceties of modern electronic book formats, and should be avoided. Schools need to stop accepting cheap “Print To PDF”-style textbooks, as well as “eBooks” that can only be read through a web browser using special software that doesn’t support any of the above features. If your eBook implementation is less powerful than a physical book, you’re doing it wrong. Please improve!
Separation of form and content.
Typesetting concerns do not mean all is lost. If anything, this is a wonderful opportunity to make revolutionary steps in improving the way written knowledge is transferred. As we’ve learned from the web, it’s entirely possible to design for dynamic layouts provided you can assume at least a few constraints.
Physical textbook typesetting needs to be optimized for a specific target. Electronic typesetting needs to optimize for overall good layout within a range of constraints. Web applications can generate multiple document types for the same content, and with such nimble requirements for electronic media, we can do the same with updated forms of typesetting languages like LaTeX.
eBooks don’t require a local sales representative.
It’s nice, I suppose, to have a rep on call to overnight you a textbook at a moment’s notice, but that’s not necessary when I can click a button on my iPad. The issue here is misaligned incentives in the payment of distributors.
To use a real-world example, my local Pearson rep seems to earn commissions on physical textbook sales to my classes, but not electronic copies sold through Pearson affiliate (or subsidiary?) CourseSmart. She’s always happy to help when I’m interested in buying paper, but suddenly goes unresponsive when I have a tangential question about an electronic book.
It’s not her job to help with online sales. That’s an entirely different business unit or whatever, so who cares about that, right? Here are some great properties of CourseSmart, Pearson’s chosen eBook sales system:
You can only access your electronic textbook for about 6 months. That’s right, you don’t own it. You’re essentially renting it for the semester.
The pricing is pretty high, especially considering you can often sell back physical books after the semester. You always get $0 after the rental period. Savings? Please.
You can’t really do anything neat with the electronic version, like download a simple effing PDF, even if you’re a legitimate, verified instructor that can already download content such as instructor solutions manuals and slides. (They don’t trust us. Trust me on that.)
Pearson and college sales/support infrastructure and personal incentives aren’t (yet) set up to fluidly handle electronic texts.
In short, CourseSmart sucks. I thought it was going to be cheaper, simpler and generally better for students to use the electronic versions, but given the high-cost, “rent”-like nature and lack of features, it’s not great. Personally I’m looking to switch to publishers that understand ebook-oriented use cases and build their products to fully take advantage of the Internet, rather than just go through the motions. PragProg is a great example of a technical publisher that’s moving us in the right direction. (I send them a lot of business and highly recommend you check them out, too!)
I have to believe that the profit margins on selling an 800-page textbook as a $60 “online view only for 6 months” product are greater than on a $100 hunk of tree, especially considering the expenses of transporting, retailing, and commissioning (or marking up) every step. I suppose many of those people don’t want to go electronic for fear of job loss, even though the jobs may simply change instead.
Fast release cycles.
With properly designed exchange formats, textbooks and metadata can be pushed and pulled between publisher, retailer and consumer in under a second. The concept of “this year’s edition” starts to lose meaning if the publisher can fix a typo and push out a new revision with no more effort than updating a wiki page. This poses serious technological challenges with ISBNs, Library of Congress records etc., but these things are all fixable, and none of the solutions have anything to do with building a new PDF that gets emailed to me. (Even Amazon doesn’t do this right yet, even with their .azw format. When you agree to receive an optional update of a book you’ve purchased from Amazon, you lose all your notes and highlights from the original version. Lame.)
We need to embrace this idea of rapid content change, rather than cling to the idea of annual product releases. We can do it. Really.
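To make the idea concrete, here’s a toy sketch of what a revision-aware metadata record traveling from publisher to retailer might look like. The schema is entirely made up for illustration; the point is that bumping a revision is as cheap as a wiki edit, and knowing which sections changed is exactly what would let readers keep their notes and highlights across updates.

```python
# Toy illustration (entirely hypothetical schema): a revision-aware book record
# a publisher could push to retailers, so "editions" become cheap revisions.
import json

record = {
    "isbn": "978-0-000-00000-0",   # placeholder identifier
    "title": "Introductory Statistics",
    "revision": 42,                # bumped on every typo fix or content change
    "changed_sections": ["3.2"],   # lets reader apps preserve notes elsewhere
}

print(json.dumps(record, indent=2))  # what would travel publisher -> retailer
```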
Closing thoughts.
All the players in the textbook industry have different incentive systems, but all have much to gain. Rather than using the friendly neighborhood college bookstore as a primary retail outlet, the supply chain process… no, the entire industry, needs a comprehensive dose of cold water to the face. All is not lost, but in 2011? They still don’t get it.
To help answer the question of why it takes so long to get an event recording on disc, even for small events, I’ve put together this high-level, high definition (720p) behind-the-scenes walkthrough of the post-production editing, mixing, mastering, replication and packaging processes used for the ahCOOTstic Rock event (and others) brought to you by the Phoenix Independent Musicians’ Project (PIMP Google Group) and Sonic Binge Records.
On my way to the office this morning my bag seemed especially heavy, the natural effect of stuffing an attaché with a MacBook Pro, iPad, iPhone and Kindle. I felt silly feeling it necessary to keep all these electronic gizmos simultaneously latched onto my shoulder within a second’s reach of my left hand, each ready to perform some specific task that required firing up its individual display and taking a few milliamp-hours off its individual lithium-ion battery pack.
Each of these devices is especially good at performing certain types of tasks, to the point that it also feels silly not to use the tool best suited to the job. To a computer scientist all four of these are technically Turing machines–more commonly known as “computers”–but each has its own practical strengths and weaknesses. And while carrying any single device by itself leaves one incredibly mobile, taking all four does not. I’m like a sleep-deprived mother of quadruplets sluggishly pushing a custom-designed stroller through the grocery store. The monstrosity of brushed metal widgets, cables and wall warts I’m toting reminds me of that fictional car designed by Homer Simpson.
But such are the pros and cons of appliance computing. Not all of these hardware devices are technically needed on this particular Wednesday, but the combination of specialized functions provided by the union allows me a more productive day. I could have left at least one at home, provided I had a reasonable amount of interoperability between them to shuffle data.
…
Stop. Oh god. I saw this coming the second Amazon announced they would use their own locked-down format (.azw/.mobi) for eBooks purchased through their store. (Aside: If you’re interested in Kindle encryption you may eventually find yourself at my KindleTools site for finding PIDs.) My biggest fear with regard to the emerging ebook market is now in full swing. Not only are there subtle, often incompatible (and proprietary) differences in ebook data between reading applications, but most of the time I can’t even legally attempt a conversion. It’s like Microsoft Office vs. WordPerfect vs. Lotus Notes vs. The People of Earth all over again.
Each content retailer is trying to be the de facto digital ebook data locker for the entire market, and the folks at the top of the food chain–most notably Amazon–have no business interest in supporting standardized (or at least conventionalized) data interchange with less popular consumer applications and devices. But why would they? If they can provide the content and the software and the hardware with a majority of the market, why not do everything possible to lock consumers into the monopoly? Here’s a painstakingly detailed scientific visualization of the current eBook market:
Let me make this clear: I am no stranger to paying for books. I read a LOT, and especially over the past year it hasn’t been unheard of for me to spend well over a hundred dollars per month on eBook content alone, which I do for many reasons. Here’s the 8th-grade equation demonstrating how I scientifically measure the value of this technology in my life:
Vebu = [Knowledge Gained (in the fictional unit of “knols”, K) × Ease of Future Reference (in the subjective economic unit of utils, u)] / [Content Cost (in dollars, $) × Total Consumption Time (in hours, converted to seconds at 3600 s per hour)]
This new unit of electronic book value that I’ll refer to as a Vebu–short for “value of ebooks unit”–reduces to this:
Vebu == knol utils per 3600 dollar seconds == uK/3600$s
In other words, we need to maximize the availability of meaningful information (knol utils) at a minimum of money and time (dollar seconds) to achieve maximum value for our electronic virtual book library, Vebu. A simple, unsophisticated yet meaningful quantity.
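Since this is allegedly an 8th-grade equation, here’s an equally unscientific calculator for it. Every number below is invented purely for illustration, not a measurement of anything real:

```python
# Tongue-in-cheek Vebu calculator for the formula above. All inputs here are
# made-up illustrations.
def vebu(knols, utils, dollars, hours):
    """Vebu = (K * u) / ($ * s), with consumption time given in hours."""
    return (knols * utils) / (dollars * hours * 3600)

open_book = vebu(knols=10, utils=8, dollars=20, hours=5)    # searchable, portable copy
locked_book = vebu(knols=10, utils=2, dollars=20, hours=8)  # DRM'd, store-locked copy
print(open_book > locked_book)  # -> True: lock-in lowers u and raises s, sinking Vebu
```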
But here’s how this effed up market affects Vebu:
I have no less than 7 different, largely incompatible pieces of eBook reader software on my iPad alone, as of today: Kindle, iBooks, Borders, B&N, Stanza, Free Books and Wattpad. (Effect: lower u, lowering Vebu.)
Borders, Barnes & Noble and the other brick-and-mortar vendors are freaking out, scared they’ll become the next Blockbuster of the Netflix era. Each has their own application that works primarily with their own store, but not much else, forcing you to use their reader. Not all software is available on all platforms, though, sometimes making lookups a major pain, and different retailers of course carry different publishers, so it’s easy to unwittingly get sucked into all of them. (Effect: lower u and higher s, significantly lowering Vebu.)
None of the distributor reader apps are keen on “sharing” your content with friends/colleagues, forcing others to re-purchase content you should have been able to at least “lend” to them in the freakin’ first place. (Effect: higher $, lowering Vebu.)
O’Reilly, PragProg and other publishers don’t think the major distributors should be necessary, and some are leading the charge by allowing you to directly purchase digital editions in a variety of formats. This is fine–I have no major qualms about this–but since most reader applications are trying to push you to the store of the vendor that wrote the app, importing data can be a headache. (Effect: higher s, lowering Vebu.)
Amazon, already having a huge content delivery infrastructure, offers proprietary features such as cross-device synchronization of bookmarks and highlights that the others don’t match. The Kindle hardware will also read to you in the car, but it only syncs with Amazon services; Apple’s iBooks/iTunes is better with PDFs but doesn’t have text-to-speech; Stanza aggregates many different content sources but isn’t as great with commercial stuff… everything has distinct pros and cons. They’re all different and I have to use all of them because they can’t/won’t talk to each other and I can never remember which damn content locker I committed my stuff to. (Effect: lower u, higher s, significantly lowering Vebu.)
The problem grows exponentially greater as more retailers, publishers, application developers, and independent authors enter the market, intentionally building walls that consumers have no interest in observing.
Here’s what needs to happen.
If you’re Amazon, Borders, B&N, or really any retailer that is gung-ho about becoming the provider of individual data lockers, that’s fine, but you need to give us the key. It’s understandable that you’re reluctant to open up your formats in a way that could be consumed in ways you can’t control, but consider this: if you never figure out how to allow publisher content to cross application and retailer boundaries, you are effectively capping Vebu at artificially low levels. If you instead focus on optimizing all the variables rather than restraining them, you’ll have a platform unmatched even by Amazon. I, for one, would switch to it in a heartbeat.
I’ve slowly updated components of The $1K Home Studio over the last few years, but have never had a low-cost, DIY solution for disc replication. After playing with external CD burners and evaluating various proprietary hardware options such as the Aleratec auto-flip burner and MicroBoard tower replicators, amongst many others, I decided that the current commercial solutions are nice, but most definitely overpriced. So I decided to develop my own. This custom-built behemoth is assembled from common off-the-shelf (COTS) hardware from Fry’s Electronics and inexpensive commercial software. It costs less to own than commercially branded replicators, and also functions as a normal desktop computer since it runs Windows 7 and Linux. (I took care to also buy a Gigabyte-brand motherboard that supposedly supports the OSx86 (“hackintosh”) project, but have had little success with the installation.)
Hardware
Intel i5 750 64-bit CPU. (Features 4 cores.)
4GB RAM.
8 x (yes, eight) Lite-On CD/DVD 5.25″ SATA burner drives.
Gigabyte motherboard with lots of SATA ports.
Add-on SATA card. (Most motherboards won’t have enough connectors, especially if you have 8 x burners plus 4 x hard drives. 🙂 )
Big-ass power supply. (The first one I bought wouldn’t even boot the thing. I put in a monster and everything started working.)
Software
The point of having all these burners is to burn to all of them simultaneously, but Windows 7 and OS X cannot do this out of the box. Only a small subset of CD/DVD burning software on the market supports parallel burning, and some only seem to support multiple burners for specific types of burns. What’s worked best for me so far is…
Nero Multimedia Suite 10 for concurrent audio and data burning with multiple burners. You don’t have a lot of easy-to-use alternatives here, and I’ve also noticed a few glitches with Nero. Keep your eye out for sales here and you can pick up a copy dirt cheap.
Acoustica CD/DVD Label Maker for concurrent LightScribe replication across multiple burners. Again, not a lot of options here. The free software from LightScribe.com does not support multiple burners, though some vendor-specific bundles seem to. (LaCie’s LightScribe software in particular appears to support simultaneous LightScribe burns, and they also have a Mac version. I would have gone with a Mac-based solution, but 8 x USB 2.0 drives probably would not work so well.)
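Since the machine dual-boots Linux, it’s worth noting the same parallel-burn idea can be scripted there without commercial software. The sketch below is a hypothetical illustration using wodim (the cdrecord fork), not part of my actual Nero/Acoustica workflow; device paths and burn speed will vary by build:

```python
# Hypothetical sketch: parallel data burns by spawning one wodim process per
# drive. Device paths below are assumptions; adjust for your own hardware.
import subprocess

DRIVES = [f"/dev/sr{i}" for i in range(8)]  # eight SATA burners

def burn_all(iso_path):
    procs = [
        subprocess.Popen(["wodim", "-v", f"dev={dev}", "speed=16", iso_path])
        for dev in DRIVES
    ]
    for p in procs:  # wait for every burner to finish before reloading discs
        p.wait()

burn_all("master.iso")
```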
I decided to create all my replicated discs using LightScribe technology. This allows me to flip LightScribe CD-Rs upside-down in the burner and use the laser to burn custom graphics onto the top of the disc. I also made the command decision to use COTS CD sleeves instead of jewel cases or slimline cases. The plastic ones are more expensive, always crack, and are pretty much useless from the start since most people seem to rip their CDs nowadays anyway. Sleeves protect the disc, come in many colors, are far less expensive (even cheaper in bulk), and perhaps best of all can be printed on directly with an ordinary laser or inkjet printer.
System Pros
Inexpensive initial fixed cost of hardware parts and software licenses.
Inexpensive variable cost per disc since LightScribe labeling uses the drive laser instead of ink. There are no costly consumables to replace. (Ordinary LightScribe media purchased in bulk works great.)
Quick data, audio and LightScribe replication using 8 concurrent burners.
Doable by anyone capable of building a PC and willing to invest a little time.
Functions beautifully as a normal desktop computer.
System Cons
Not completely automated like some commercial units because disc loading, unloading and flipping (if using LightScribe) is a manual process.
Still uses CD-Rs. These are not the same as commercially pressed mass media discs, but a lot cheaper.
(This one is only applicable to audio.) I’ve yet to find inexpensive parallel burning software that can handle DDP images, the standard in “Red Book” audio CD mastering.
Since LightScribe labeling uses the drive laser instead of ink, disc labels are grayscale only. (Note: You have a lot of options in disc color, though, so it’s not a big deal. Just use your creativity.)
Replication Process Overview
My primary purpose for this buildout is to replicate audio CDs as quickly as possible for Sonic Binge Records: the awesome music production company. In particular, I need to quickly replicate a pancake’s worth (usually 25-50) of audio CDs as inexpensively as possible. After much trial and error with the process, this is what I’ve found works best.
Create final CD master image. (For me that’s using WaveBurner on a Mac. For replication purposes it doesn’t really matter as long as the master is good.)
Take four empty CD pancake containers and label them “Blank”, “Burned”, “Labeled”, and “Ready” to create an assembly line process. You can of course save these for future jobs.
Use Nero Burning ROM to replicate batches of 8 at a time. When they’re done, be sure to put them in the “Burned” stack so you don’t get burned discs confused with “Blank” discs.
While they’re burning, create a square grayscale graphic for LightScribe burning. (Free label creator software is available, though anything like Photoshop works too. I usually use a combination of Photoshop and Acoustica.)
Use Acoustica to label batches of 8 at a time. Each batch will take a while. Full-disc burns seem to take around 30 minutes per batch: much longer than the data/audio side of a standard CD-R. Move discs to the “Ready” pile when they’re done. (Note: The “Labeled” pile is for discs that have been LightScribe labeled but not burned with data or audio. You can end up in this situation when using multiple computers to do burning.)
While they’re burning, use your favorite document application to design your printed CD sleeves. I’ve started buying color variety packs in bulk packs of 300 to keep options high and costs down.
Bulk print the entire order of sleeves in a single run. As long as you can set the size of the feeder tray, your existing feeder should work fine. (CAUTION: remember that the “window” is made of plastic, and can melt if exposed to heat. Think twice before trying your laser printer. 🙂 )
Take discs from your “Ready” pile (as they finish getting labeled) and slip them into sleeves to create the final product, suitable for general distribution. The lasered imaging adds a great, distinctive touch, and of course you can get as creative as you want with the sleeves, too.
Done! (aka beer time.)
Costs
Fixed: ~$1K for the machine build, with about $400 of that just for the burners. I reused/repurposed parts from old junker machines where I could, and could have saved some money by buying online. I was in a rush and just went to the store.
Variable: Roughly $0.40 – $1.00 per disc, depending on the disc quality, packaging, ink etc. you decide to use for each project. (All things considered, the $0.40 version looks pretty decent!)
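For the spreadsheet-inclined, here’s a trivial break-even sketch using the figures above. The outsourced per-disc price is a made-up placeholder, not a quote from any duplication house:

```python
# Back-of-the-envelope cost comparison using the figures above. The $1.50
# outsourced per-disc price is a hypothetical placeholder.
def diy_cost(discs, fixed=1000.0, per_disc=0.40):
    return fixed + discs * per_disc

def outsourced_cost(discs, per_disc=1.50):
    return discs * per_disc

for n in (100, 500, 1000, 2000):
    print(n, diy_cost(n), outsourced_cost(n))
# Under these assumptions the build pays for itself at roughly 909 discs,
# where 1000 + 0.40 * n == 1.50 * n.
```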
Closing Thoughts
If you’re a musician without computer skills I would not recommend attempting this project, but if you feel fairly comfortable putting together machines, it’s honestly not that hard. It’s just a PC, after all. (Disclaimer: I do have a degree in Computer Science and Engineering, so my perspective of “not that hard” may be a bit skewed.)
I hope you’ve found this rough how-to guide both inspirational and informative. It’s very useful to have a replication machine handy, and if you’re actively working with people on projects intended for distribution, it’s a great investment!
Please use the comments section below for your general comments and questions, and I’d be happy to address them. Thanks for reading!
My rating: 5 of 5 stars
Meltdown is an evidence-based, academically credible, and brutally honest analysis of the causes and effects of the economic depressions faced in the United States since the early 1900s. Thomas Woods’ almost adversarial opinion of the Federal Reserve is supported from many different angles and data sources, as is his affinity for Austrian business cycle theory (as opposed to the Keynesian economics primarily seen in the U.S.).
For those with interest in macroeconomic theory or the effects of government intervention on both business and individual finance, this is absolutely required reading. Those with politically libertarian leanings will also find many of the facts presented within outright shocking. I personally finished the electronic version of this book with over 10 pages of highlights, and plan to continue following Woods’ work.
One of my biggest business frustrations in 2009 has centered around Search Engine Optimization (SEO): people’s fundamental misunderstandings of what SEO is, what it theoretically accomplishes, and the large number of shysters scaring businesses into pursuing activities not nearly as important as they are made out to be. Inquiries usually go like this…
Preston,
My business–ACME Tires–has a basic website for customers with our logo, contact information and such, and I am interested in generating more business out of it. I have asked a few people for recommendations and am now talking to several SEO providers offering services ranging from $100-$1,000/month. What do your SEO services cost and what guarantees do you make? (I need to be #1 on Google.) Thanks,
Alice
My initial natural inclination is to leer at my computer monitor and internalize a snide response; however, it’s not the customer’s fault for having a convoluted understanding of SEO, so I often send a polite, brief response from a science and engineering standpoint. At this point, the recipient usually dismisses the information and goes about spending 1000% more than they should on services. Here’s the lowdown in plain English…
Legitimate Motivations For SEO
ACME stands to see legitimate value in several key ways from having their web presence tweaked by an “SEO expert”. Notably:
Higher rankings in Search Engine Results Pages (SERP). When I search for “tires phoenix, az”, ACME wants to come up as the #1 organic search result. This increases visibility over competitors and thus increases the likelihood that the searcher will click on the ACME page synopsis (and be directed to the ACME website).
Lower Advertising Costs. When ACME uses Google AdWords to pay for ad placement in search engine results pages, Google must determine an appropriate cost for a click-through event on the ad. (In other words, ACME will pay Google whenever a user clicks on an advertisement and is directed to the ACME website.) The algorithms for making the cost decision are not public information, but are based partly on relevance of content. If Google thinks ACME Tires is the best thing since sliced bread, costs will be lower than if Google thinks ACME is a bakery or jeweler.
Illegitimate Terminology
The very legitimacy of the term and notion of “Search Engine Optimization” is debatable. The core function of a search engine is to guide people to content in such a way that the “right” resources can be found using brief, relevant terms. The job of the ACME Tires website is to provide information and services to ACME customers regarding tires. It is not ACME’s job to be an expert in the search engine marketplace. It could be argued, then, that the notion of SEO is a moot point, as it should be the job of the search engine vendor to figure out how to best index and present content in an optimized way. That being said, the developer of the ACME website does have a list of technical tasks that need to be done to assure that content is well indexed and legitimate best practices are used–which I will not go into here–to put the most important site concepts at the forefront of search engine visibility. But we should NOT think:
the ACME website is part of the search engine itself,
the site cannot be “picked up” by search engines without extensive blackhat techniques, or most importantly,
it is ACME’s job to make sure search engines (Google, Bing, Yahoo! etc.) function properly.
The term “optimization” as used by most SEO companies can be better described as “gaming”. Search Engine Gaming (SEG) is a more accurate term than SEO because it reflects that the intent of site tweaking is to gain marketing favor, and improving content from the standpoint of the consumer is of secondary concern, if at all. From this point forward I will refer to activities that both improve marketing value and improve content consumability as “SEO”, and activities that improve marketing value but are indifferent to or negatively impact content as “SEG”.
Blackhat SEG Shysters
An unfortunate number of sleazeballs sell ethically questionable “SEO” services. This is not to say that no technical work is being done, nor that they cannot show marketing results, but they choose to do so in ways that make reasonable engineers cringe in disgust. No definitive list of “black hat” activities is completely agreed upon, and as with issues like U.S. health care, it’s a highly subjective topic wherein opinions greatly vary. Unfortunately, those that have the most to gain (vendors) are often leading the debates and giving the seminars, which skews public perception of SEO and what is/isn’t necessary. Common activities that I consider black (or grey, at best) include:
Keyword Stuffing. One of the easier ways to increase SERP placement is to cram as many important keywords and search phrases into your website as possible. I personally define keyword stuffing to be, “Page copy intentionally packed with a set of repetitive phrases to the point of becoming frustratingly redundant, difficult to comprehend, or otherwise awkward to read.”
Referrer Parsing. Whenever you click the ACME ad, the server running the ACME website knows the website from whence you came. When you come from a search engine, the site may be able to determine the search terms you used to find the link. This detection can all happen before the ACME website is rendered, which means when you search for “tracktuff tires, az” and click through to the ACME website, the ACME webserver can dynamically generate a headline reading “TrackTuff Tires Now 50% Off In Arizona!”, regardless of how relevant the “tracktuff” name or brand actually is to the ACME website. (A sketch of this trick follows this list.) Now, for some reason, all the SEO consultants I’ve met that are doing this seem to think they invented it. (Seriously, I even know one guy that’s trying to get a patent for it.)
Automated Article Submission. Databases of articles are a great place for users to do general research and discovery. If you’re automating “article” submission to hundreds of databases simultaneously, however, your submissions will almost certainly be little more than biased PR and marketing content oriented towards getting links to ACME. Actually, there are many “article databases” that fully acknowledge and support this as a way to increase visibility of their own ads.
Automated Link Generation. Businesses adopting social media as a form of customer service and marketing often complain of the time required to pursue the natural creation of inbound links. This makes the business very receptive to vendors claiming to have solved the “social media time commitment problem” by automating responses to social networking and social media comments. To an engineer, doing so obviously misses the whole point of social media/social networking technology and is another form of spam. Additionally, the value of doing this on blogs and forums is next to nothing (due to the rel=”nofollow” attribute). Plus, the best links will generally come from partner websites and large-scale references in protected, reviewed publications such as journals and newspapers, which cannot be automatically generated for obvious reasons. In short: it’s pretty safe to consider automatic link generation a form of spam.
Email Spam. This is obviously a Bad Thing to do, but that doesn’t stop tons of vendors from doing it legally. The U.S. CAN-SPAM act does not require people to explicitly opt-in to be put on a mailing list, given they have some form of “relationship” with the company. Also, certain types of organizations–notably religious and political–may be exempt from some of these laws entirely.
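To show how mechanical the referrer parsing trick really is, here’s a minimal sketch in Python. It assumes the search engine passes the query in a “q” parameter (as Google did circa 2009); the function name and headline template are hypothetical illustrations, not anyone’s actual implementation:

```python
# Minimal sketch of referrer parsing as described above. Hypothetical names;
# a real implementation would hook into the web server's request handling.
from urllib.parse import urlparse, parse_qs

def headline_from_referrer(referrer, default="Welcome to ACME Tires!"):
    """Build a headline from the search terms hidden in a referrer URL."""
    parsed = urlparse(referrer or "")
    terms = parse_qs(parsed.query).get("q")  # "q" carries the search phrase
    if not terms:
        return default
    return f"{terms[0].title()} Now 50% Off In Arizona!"

print(headline_from_referrer("http://www.google.com/search?q=tracktuff+tires+az"))
# -> "Tracktuff Tires Az Now 50% Off In Arizona!"
```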
Stupid Guarantees
A SEG company making a “#1 on Google in 24 hours!”-type claim is almost certainly using blackhat techniques and/or getting you prime placement for a term so long and specific as to be useless. For example, it shouldn’t be surprising that “acme tires phoenix arizona” would turn up the correct page first on a search engine, because:
the intent of the searcher is almost certainly to find this one specific business website, and
there are probably only a handful of resources on the web that match these terms well.
A search engine like Google might even return a map to the store in the first results page. Getting #1 placement for “tires arizona”, however, will be much more difficult, since that phrase matches many more web resources, and, from the perspective of a small business owner, some of the competitors will have the time and money to put magnitudes more content online and supplement it with marketing campaigns and PR.
Closing Thoughts
SEO/SEG is a technologically and ethically grey area, and vendors not defining clear boundaries of what they do for your money should generally be avoided. Do spend some effort making sure the copy and syntax of website pages are well written, designed for usability and structured for search engine comprehension. But instead of paying a monthly service contract to an “SEO guy”, put that money into continued development of content that will please existing customers and help attract new ones. Pay attention to your placement in search engine results, sure, but at all times stay focused on building value and meaningful business relationships over click-through rates and SERP rankings.
If I haven’t blabbed your ear off about it already, OpenRain has a small business web presence product called the Online Business Platform. It’s a big deal as it’s fairly unique in many different ways.
Anyway, the upper service-level options include consulting and guidance on advertising with Google AdWords, which means we generally need access to the client’s AdWords account to monitor progress and such.
The problem is that AdWords accounts have the idiotic restriction of allowing a given email address to be tied to only one AdWords account. In other words, preston.lee@example.com can be granted access to OpenRain’s AdWords account, but not client accounts nor other personal side-project accounts. (Google Analytics, on the other hand, allows a single email address to manage multiple Analytics accounts in a much saner manner.) Considering the amount of revenue Google generates from paid Internet marketing, maintaining access to multiple AdWords accounts is a 7-layer stupidburger. I’m sure there’s a wonderful technical rationale that generates rainbows of technical applause, but as a user I couldn’t care less.
To answer the “How do I manage multiple AdWords accounts?” question, Google created My Client Center (MCC): essentially an AdWords account management aggregator that’s part of an optional “Google Advertising Professionals” program. The kicker? To create an MCC account–and yes, it’s a separate account–you can’t use the email address of the account(s) you’re trying to aggregate. We ended up creating a silly AdWordsIsStupid@example.com email group and used that email address to access the MCC dashboard, which in turn gets granted access to the different AdWords accounts that are, again, all tied to different email addresses.
So when I say AdWords account access is a cacophony of stupid, I mean it. N+1 email addresses required-level stupid. Bad Google!
This morning I had the pleasure of doing a guest spot on Career Launch with Jane & Al: a VoiceAmerica Variety show airing live every Monday from 8-9am. Today’s discussion was on the role of social networking tools in the employment process.
Lots of capable people are finding themselves particularly hard pressed to find jobs right now, being in the midst of a recession and all, so if you’re interested in learning how social networking tools (LinkedIn specifically) can be incredibly beneficial for your career, check it out!