WO2012088307A1 - Method for customizing the display of descriptive information about media assets - Google Patents

Method for customizing the display of descriptive information about media assets

Info

Publication number
WO2012088307A1
Authority
WO
WIPO (PCT)
Prior art keywords
descriptive information
information
media asset
window
descriptive
Application number
PCT/US2011/066574
Other languages
French (fr)
Inventor
Vasil NADZAKOV
Hao Chi TRAN
Praneeth K. KONGARA
Andrew YOON
Roger Yeh
Basil BADAWIYEH
Dana Shawn FORTE
Lee Douglas Shartzer
Hauke BAHR
Phil SWICKARD
Original Assignee
Thomson Licensing
Application filed by Thomson Licensing
Priority to EP11850945.4A (published as EP2656176A4)
Priority to CN201180062511.0A (published as CN103270473B)
Priority to KR1020137019254A (published as KR20140020852A)
Priority to BR112013016163A (published as BR112013016163A2)
Priority to JP2013546380A (published as JP6078476B2)
Publication of WO2012088307A1


Classifications

    • G06F3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0488 Interaction techniques based on GUIs using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H04N21/431 Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4438 Window management, e.g. event handling following interaction with the user interface
    • H04N21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows

Definitions

  • the present disclosure generally relates to a user interface for displaying descriptive information about a media asset. More particularly, the present disclosure is for a method for customizing a user interface that displays descriptive information for a media asset.
  • a user can use a service such as an electronic program guide or a search engine to find descriptive information that describes such a media asset.
  • the results of such a query typically yield descriptive information that is simply shown as text where the order of such results cannot be modified by a user. That is, descriptive information such as actor information, crew information, a director of a media asset, and the like are presented in the same order with the same type of information, regardless of the contents of a search query.
  • a method is presented with a user interface that allows a user to customize the presentation of descriptive information that corresponds to a media asset.
  • the customization includes the addition or removal of windows, where such windows correspond to a media asset descriptive information field and/or a descriptive information media asset.
  • FIG. 1 is a block diagram of an exemplary system for delivering video content in accordance with the present disclosure
  • FIG. 2 is a block diagram of an exemplary set-top box/digital video recorder (DVR) as a media device in accordance with the present disclosure
  • FIG. 3 is a perspective view of an exemplary device in accordance with an embodiment of the present disclosure.
  • FIG. 4 illustrates an exemplary embodiment of the use of gestures for a sensing controller or touch screen in accordance with the present disclosure
  • FIG. 5 presents a generic diagram of an exemplary application server 500 in accordance with the present disclosure
  • FIG. 6 illustrates an exemplary embodiment of a search user interface used for conducting a search in accordance with the present disclosure
  • FIG. 7 illustrates an exemplary embodiment of a user interface presenting search results from a search operation in accordance with the present disclosure
  • FIG. 8 illustrates an exemplary embodiment of a user interface presenting descriptive information about a media asset in accordance with the present disclosure
  • FIG. 9 illustrates an exemplary embodiment of a user interface presenting descriptive information about a media asset in accordance with the present disclosure
  • FIG. 10 illustrates an exemplary embodiment of a user interface presenting descriptive information about the cast associated with a media asset in accordance with the present disclosure
  • FIG. 11 illustrates an exemplary embodiment of a user interface presenting descriptive information about the video trailers associated with a media asset in accordance with the present disclosure
  • FIG. 12 illustrates an exemplary embodiment of a user interface for presenting descriptive information in accordance with the present disclosure
  • FIG. 13 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user moving a component of the user interface in accordance with the present disclosure
  • FIG. 14 illustrates an exemplary embodiment of a user interface for presenting descriptive information that was modified by a user in accordance with the present disclosure
  • FIG. 15 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user using a resizing operation in accordance with the present disclosure
  • FIG. 16 illustrates an exemplary embodiment of a user interface for presenting descriptive information that was modified by a user in accordance with the present disclosure
  • FIG. 17 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure
  • FIG. 18 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure
  • FIG. 19 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure
  • FIG. 20 presents an exemplary embodiment of a block diagram with various servers that can be designated for receiving descriptive information for a device
  • FIG. 21 illustrates a flowchart of a process for modifying a user interface in accordance with the present disclosure.
  • the present disclosure provides several different embodiments of a user interface that is used for displaying information about media assets such as videos, television shows, movies, audio, music, video games, and the like.
  • in addition, such user interface embodiments can support operations for receiving, recording, playing back, purchasing, and the like
  • Such user interfaces can be implemented on devices such as a computer, set top box, media server, tablet, mobile phone, personal media device, portable video game system, video game system, and so forth.
  • a descriptive information application can be a program that introduces more information about a media asset (such as a television show, movie, radio program, music, song, multimedia, game, and the like) than is typically relayed. For example, when a user watches a television show, the user can call up an electronic program guide that lists information such as the time and perhaps the cast of the show. Such information however lacks the depth of descriptive information media assets (images, video, audio, webpages, interactive applications, games, descriptive text, media purchase information, suggested media, and the like) which is provided with the descriptive information application discussed herein. That is, the present principles provide a scheme for providing additional media assets, called descriptive information media assets, beyond the typical text description that is provided when a user attempts to find out information about a media asset.
  • a media asset such as a television show, movie, radio program, music, song, multimedia, game, and the like
  • the content originates from a content source 102, such as a movie studio or production house.
  • the content may be supplied in at least one of two forms.
  • One form may be a broadcast form of content.
  • the broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc.
  • the broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 1 (106).
  • Delivery network 1 (106) may include satellite link transmission from a national center to one or more regional or local centers.
  • Delivery network 1 (106) may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, cable broadcast or from an external network via IP.
  • the locally delivered content is provided to a user's set top box/digital video recorder (DVR) 108 in a user's home, where the content will subsequently be included in the body of available content that may be searched by the user.
  • Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager.
  • the special content may be content requested by the user.
  • the special content may be delivered to a content manager 110.
  • the content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service.
  • the content manager 110 may also incorporate Internet content into the delivery system, or explicitly into the search only, such that content may be searched that has not yet been delivered to the user's set top box/digital video recorder 108.
  • the content manager 110 may deliver the content to the user's set top box/digital video recorder 108 over a separate delivery network, delivery network 2 (112).
  • Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106).
  • the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110.
  • the scope of the search goes beyond available content to content that may be broadcast or made available in the future.
  • the set top box/digital video recorder 108 may receive different types of content from one or both of delivery network 1 and delivery network 2.
  • the set top box/digital video recorder 108 processes the content, and provides a separation of the content based on user preferences and commands.
  • the set top box/digital video recorder may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the set top box/digital video recorder 108 and features associated with playing back stored content will be described below in relation to FIG. 2.
  • the processed content is provided to a display device 114.
  • the display device 114 may be a conventional 2-D type display or may alternatively be an advanced 3-D display. It should be appreciated that other devices having display capabilities such as wireless phones, PDAs, computers, gaming platforms, remote controls, multi-media players, or the like, may employ the teachings of the present disclosure and are considered within the scope of the present disclosure.
  • Application server 116 is coupled to set top box 108 via delivery network 2 (112), where application server 116 can be configured to run at least one application that responds to various calls (requests) from set top box 108. That is, set top box 108 can operate as a client while application server 116 operates as a server and can run various applications such as a web server, database, and the like.
  • the calls can be implemented using a framework such as HTML, JAVA, an AJAX framework, ASP (ACTIVE SERVER PAGES), and the like where application server 116 responds to received calls or functions from set top box 108 as to deliver data.
  • the application server 116 can run off of a Representational State Transfer (REST) model or from a Simple Object Access Protocol (SOAP) model.
  • application server 116 can be implemented using a Glassfish based server using a Jersey based Application Program Interface which receives various calls from set top box 108, although alternative servers and APIs can also be used; a sketch of such an interface is given below.
  • a second model can also be used where application server 116 and set top box 108 are configured to operate in a peer-to-peer environment where applications are run in a decentralized manner.
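  • As an illustrative sketch of the client-server model described above, a Jersey (JAX-RS) resource on such an application server might be written along the following lines. The class name, URL path, and response shape are assumptions for illustration only and are not specified by the present disclosure.

    // Hypothetical JAX-RS (Jersey) resource for an application server that
    // answers descriptive information calls from a set top box client.
    // All names (path, class, parameter) are illustrative assumptions.
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/assets")
    public class DescriptiveInfoResource {

        // GET /assets/{contentId}/trailers returns trailer URLs as JSON.
        @GET
        @Path("/{contentId}/trailers")
        @Produces(MediaType.APPLICATION_JSON)
        public String trailers(@PathParam("contentId") String contentId) {
            // A real deployment would query a database populated during
            // ingest; a fixed empty list stands in for that here.
            return "{\"contentId\":\"" + contentId + "\",\"trailers\":[]}";
        }
    }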
  • In FIG. 2, a block diagram of an embodiment of the core of a set top box/digital video recorder 200 is shown.
  • the device 200 shown may also be incorporated into other systems including display device 114.
  • device 200 can be implemented where display device 214 is a touch screen device so that the display 214 can be used for inputting commands or gestures for controlling the operation of device 200.
  • several components necessary for complete operation of the system are not shown in the interest of conciseness, as they are well known to those skilled in the art.
  • the content is received in an input signal receiver 202.
  • the input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber and phone line networks.
  • the desired input signal may be selected and retrieved in the input signal receiver 202 based on user input provided through a control interface (not shown).
  • the decoded output signal is provided to an input stream processor 204.
  • the input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream.
  • the audio content is provided to an audio processor 206 for conversion from the received format, such as compressed digital signal, to an analog waveform signal.
  • the analog waveform signal is provided to an audio interface 208 and further to the display device 114 or an audio amplifier (not shown).
  • the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF).
  • the audio processor 206 also performs any necessary conversion for the storage of the audio signals.
  • the video output from the input stream processor 204 is provided to a video processor 210.
  • the video signal may be one of several formats.
  • the video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format.
  • the video processor 210 also performs any necessary conversion for the storage of the video signals.
  • a storage device 212 stores audio and video content received at the input.
  • the storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, e.g., navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216.
  • the storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static random access memory, or dynamic random access memory, or may be an interchangeable optical disk storage system such as a compact disk drive or digital video disk drive. In one embodiment, the storage device 212 may be external and not be present in the system.
  • the converted video signal from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218.
  • the display interface 218 further provides the display signal to a display device of the type described above.
  • the display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as high definition multimedia interface (HDMI). It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results in a three dimensional array as will be described in more detail below.
  • the controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and a user interface 216.
  • the controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display.
  • the controller 214 also manages the retrieval and playback of stored content/media assets and the communications with remote servers to playback content/media assets. Furthermore, as will be described below, the controller 214 performs searching of content, either stored or to be delivered via the delivery networks described above.
  • the controller 214 is further coupled to control memory 220 (e.g., volatile or non-volatile memory, including random access memory, static RAM, dynamic RAM, read only memory, programmable ROM, flash memory, EPROM, EEPROM, etc.) for storing information and instruction code for controller 214.
  • the implementation of the memory may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit connected together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
  • the user interface 216 of the present disclosure employs an input device that moves a cursor around the display, which in turn causes the content to enlarge as the cursor passes over it.
  • the input device is a remote controller, with a form of motion detection, such as a gyroscope or accelerometer, which allows the user to move a cursor freely about a screen or display.
  • the input device is a controller in the form of a touch pad or touch sensitive device that will track the user's movement on the pad or on the screen.
  • the input device could be a traditional remote control with direction buttons.
  • the user interface process of the present disclosure employs an input device that can be used to express functions, such as fast forward, rewind, etc.
  • a tablet or touch panel device 300 (which is the same as the touch screen device 116 shown in FIG. 1 and/or is an integrated example of media device 108 and touch screen device 116) may be interfaced via the user interface 216 and/or touch panel interface 222 of the receiving device 200.
  • the touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device.
  • the touch panel 300 may simply serve as a navigational tool to navigate the grid display.
  • the touch panel 300 will additionally serve as the display device allowing the user to more directly interact with the navigation through the grid display of content.
  • the touch panel device may be included as part of a remote control device containing more conventional control functions such as activator buttons.
  • the touch panel 300 can also include at least one camera element. As described in further detail below, content displayed on the touch panel device 300 may be zapped or thrown to the main screen (e.g., display device 114 shown in FIG. 1).
  • As shown in FIG. 4, the use of a gesture sensing controller or touch screen provides for a number of types of user interaction.
  • the inputs from the controller are used to define gestures and the gestures, in turn, define specific contextual commands.
  • the configuration of the sensors may permit defining movement of a user's fingers on a touch screen or may even permit defining the movement of the controller itself in one dimension or two dimensions. Two-dimensional motion, such as a diagonal, and a combination of yaw, pitch and roll can be used to define any three-dimensional motion, such as a swing.
  • a number of gestures are illustrated in FIG. 4. Gestures are interpreted in context and are identified by defined movements made by the user.
  • Bumping 420 is defined by a two-stroke drawing indicating pointing in one direction, either up, down, left or right.
  • the bumping gesture is associated with specific commands in context. For example, in a TimeShifting mode, a left-bump gesture 420 indicates rewinding, and a right-bump gesture indicates fast-forwarding. In other contexts, a bump gesture 420 is interpreted to increment a particular value in the direction designated by the bump.
  • Checking 430 is defined as in drawing a checkmark. It is similar to a downward bump gesture 420. Checking is identified in context to designate a reminder, user tag or to select an item or element. Circling 440 is defined as drawing a circle in either direction. It is possible that both directions could be distinguished.
  • Dragging 450 is defined as an angular movement of the controller (a change in pitch and/or yaw) while pressing a button (virtual or physical) on the tablet 300 (i.e., a "trigger drag").
  • the dragging gesture 450 may be used for navigation, speed, distance, time- shifting, rewinding, and forwarding.
  • Dragging 450 can be used to move a cursor, a virtual cursor, or a change of state, such as highlighting, outlining, or selecting on the display. Dragging 450 can be in any direction and is generally used to navigate in two dimensions. However, in certain interfaces, it is preferred to modify the response to the dragging command.
  • Nodding 460 is defined by two fast trigger-drag up-and-down vertical movements. Nodding 460 is used to indicate “Yes” or “Accept.”
  • X-ing 470 is defined as in drawing the letter “X.” X-ing 470 is used for "Delete” or “Block” commands.
  • Wagging 480 is defined by two trigger-drag fast back-and-forth horizontal movements. The wagging gesture 480 is used to indicate "No" or "Cancel.”
  • a simple right or left movement on the sensor as shown here may produce a fast forward or rewind function.
  • multiple sensors could be included and placed at different locations on the touch screen. For instance, a horizontal sensor for left and right movement may be placed in one spot and used for volume up and down, while a vertical sensor for up and down movement may be placed in a different spot and used for channel up/down. In this way specific gesture mappings may be used.
  • a two finger swipe gesture may be utilized to initiate the throwing or moving of content from the tablet 300 to the main screen or display device 114.
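  • As a minimal sketch of how the context-dependent gesture commands described above might be dispatched, consider the following; the gesture and command names follow the text, while the dispatch structure and class names are illustrative assumptions.

    // Contextual gesture-to-command mapping sketched from FIG. 4.
    enum Gesture { BUMP_LEFT, BUMP_RIGHT, CHECK, CIRCLE, DRAG, NOD, X_ING, WAG }
    enum Context { TIME_SHIFTING, VALUE_ADJUST }

    final class GestureDispatcher {
        String commandFor(Gesture g, Context c) {
            switch (g) {
                case BUMP_LEFT:  return c == Context.TIME_SHIFTING ? "REWIND" : "DECREMENT";
                case BUMP_RIGHT: return c == Context.TIME_SHIFTING ? "FAST_FORWARD" : "INCREMENT";
                case CHECK:      return "SELECT";   // reminder, user tag, or selection
                case NOD:        return "ACCEPT";   // two fast vertical trigger-drags
                case X_ING:      return "DELETE";   // or "Block"
                case WAG:        return "CANCEL";   // two fast horizontal trigger-drags
                default:         return "NAVIGATE"; // dragging, circling, etc.
            }
        }
    }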
  • FIG. 5 presents a generic diagram of an exemplary application server 500 in accordance with the present disclosure that can be used with delivery networks 106, 112 to provide services to devices on such networks.
  • Application server 500 can contain a storage 510 which is used to store information such as applications, data for such applications, database information, web page information, media data, and the like.
  • Storage 510 can be implemented as one or more hard drives, a RAID system, volatile memory, non-volatile memory, disc based storage, solid state storage, reel to reel tape, and the like.
  • Operating system 520 is the underlying system that is used to operate server 500. Examples of operating systems that can be used are Windows, Solaris, Unix, Linux, MacOS, and the like.
  • Applications 530 are applications such as a web server, file server, application licensing, search engine, authentication, telnet, e-mail, file transfer, media delivery, media asset information program, program code execution, and the like that can be run on application server 500.
  • Communication interface 540 provides the interface for server 500 to communicate with devices on networks 106, 112. Examples of communication interfaces are Ethernet, Satellite Interface, Fiber Connection, T1, T2, T3, Coaxial Cable, and the like, where such communications are preferably packet based, although other systems of data delivery can be used.
  • FIG. 6 represents an exemplary embodiment of a search user interface 600 to find out information about different media assets.
  • user interface 600 is used as a front end so that a search query is typed into area 620, whereby the entered search query is delivered to a search engine for search results, as known in the art.
  • a query can be made by a device to a remote search engine for search results, such as GOOGLE, YAHOO, BING, and the like; a search can be conducted for search results in a local storage device; multiple search engines can be queried at the same time for search results; and other techniques can be used in accordance with the present description.
  • User interface 600 presents a display area 610 that is viewable on a display device, display screen, and the like.
  • a search function can be called up by a user activating the search control 605, whereby the results of such an action brings up search user interface 600.
  • Area 620 is used to enter search criteria by using the characters shown in keyboard 630.
  • Button 640 can be activated to bring up popular searches made by other users, where the search information can be obtained from a remote server.
  • Button 645, when activated, brings up searches that were previously made by a user.
  • FIG. 7 illustrates an exemplary embodiment of user interface 700 presenting search results. That is, search results 650, 653, and 655 present different versions of the movie "How to Train Your Dragon", where 650 corresponds to a regular version of the movie, 653 corresponds to a 3D version, and 655 relates to an IMAX version of the movie.
  • a movie can be selected by activating a play button 659 that is next to a search result.
  • play button 659 will be generated if a user has already purchased a media asset, has access to a media asset through a subscription, has a version of a media asset accessible through local or remote storage, and the like.
  • a media asset can be purchased if a user activates an option such as options 657.
  • descriptive information about a media asset can be accessed by activating option 657, as well.
  • FIG. 8 illustrates an exemplary embodiment of user interface 800 presenting descriptive information about a media asset in accordance with the present disclosure.
  • descriptive information can be poster art, synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, producers, genre, date of release, and the like.
  • Such descriptive information can be retrieved from sources such as Baseline, Tribune Media Services (TMS), Internet Movie Database (IMDB), Internet Video Archive (IVA), or locally from a device; electronic program guide information, rating, runtime, release date, and the like can also be accessed to produce such information as well.
  • Descriptive information can also be classified by media asset descriptive information fields including categories such as synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, producers, genre, date of release, and the like.
  • Descriptive information media assets are video, audios, pictures, accompanying text, computer programs, and the like that can be called up as part of descriptive information when one seeks information about a media asset. For example, if a user is searching out descriptive information for a movie as a media asset, pictures of the actors in the movie, trailers of the movie, sound from an interview with a director from the movie, and the like all represent different forms of descriptive information media assets.
  • some descriptive information includes poster art 610 and descriptive information text 660 which contains various descriptive information fields including asset title, crew, synopsis of the asset, rating, runtime, studio, and date of release, among other fields.
  • display area 610 sometimes only presents part of the information that is available, where user interface 800 presents additional information about the movie asset's director which can be accessed by scrolling display area 610 down. That is, it is expected that descriptive information for a media asset will sometimes be larger than shown in a display area 610, where such information can be located by instituting a scroll operation in an appropriate direction by instituting a gesture, control input, command, and the like.
  • a video trailer of a media asset can be accessed by activating play trailer option 662 which will cause the playback of a trailer on a main screen, secondary screen, and the like.
  • FIG. 9 presents an exemplary embodiment of user interface 900 presenting descriptive information about a media asset in accordance with the present disclosure.
  • user interface 900 presents pictures in a display area 910 with descriptive text for corresponding media asset descriptive fields.
  • window 925 shows that an interactive application can be called up for the selected media asset.
  • window 930 indicates that there is additional information about a media asset where such information can be reviews of a media asset by critics, user comments about a media asset, a screenplay of a media asset, and the like.
  • Window 940 shows cast information, crew information is accessible through window 950, images from a media asset can be accessed through window 960, and videos of a media asset can be accessed through the activation of window 970.
  • a graphic background such as 980 can be called up in which windows such as 925, 930, 940, 950, 960, 970, and 980 are placed in the foreground of display area 910. It is expected that other windows for other descriptive information fields can be used in accordance with the present description.
  • TABLE 1 lists that for a rendered window, there is corresponding position and size information.
  • the position information in this example, is presented as x and y coordinates where the "x" is measured from the top left part of a display area, as shown in FIG. 9, where the "x" value should be positive and indicates a movement towards the right in accordance with a specified unit (pixels, inches, centimeters, and the like).
  • the "y” value is measured from the top left part of a display area, as shown in FIG. 9, where the "y” value should be positive and is a movement down in accordance with a specified unit (pixels, inches, centimeters and the like).
  • the width value "w" is the size of a window in the x direction which begins at a specified "x" coordinate and moves towards the right in a specified unit (for example, a window is 100 pixels wide, .5 inches wide, 3.23 cm wide, and the like).
  • the height value "h" is the size of a window in the y direction which begins at a specified "y" coordinate for a window and moves down in a specified unit (for example, a window is 400 pixels long, .75 inches high, 8 cm high, and the like). It is noted that the construction of a window can take other forms and use other units.
  • the background field indicates that a background can be called from a specific server or the term 'none' is used when no background exists.
  • the field called Media Asset Descriptive Field indicates a field of descriptive information that is accessible by the activation of a corresponding window.
  • Media type indicates what is to be shown in a window that corresponds to a descriptive information field which can be a video, audio, picture, page, computer program, text, combination thereof, and the like. That is, when a window is activated, a corresponding descriptive information media asset which corresponds to a media asset can be accessed. For example, if a media asset has a corresponding descriptive media asset that is a trailer in the form of a video, such a trailer can be called and activated for playback if a corresponding window is activated.
  • the server location of a corresponding descriptive media asset can be specified in this field.
  • a ranking of sources can specify that a descriptive field media asset comes first from IVA, then TMS, and then a local server.
  • This prioritization of servers for information can be designated by a user, service provider, and the like by entering in relevant information. Other examples of this prioritization of servers can be implemented in accordance with the present description.
  • page indicates the presence of an additional page, as shown in FIGS. 10 and 11 and described below.
  • page with another asset type "picture” in the media type field would indicate that a picture should be shown for a corresponding window, but a page of more windows is typically generated when the window is activated.
  • page with "video” would have a video playing in a window whereby the referenced page is generated when the corresponding window is activated.
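  • The attributes of TABLE 1 could be carried in a configuration record along the following lines; the field names mirror the table columns discussed above, while the record layout and the example values are illustrative assumptions rather than the disclosure's own format.

    // One TABLE 1 style row: where a window sits, what it shows, and which
    // servers to try, in priority order, for its descriptive information.
    import java.util.List;

    record WindowConfig(
            String windowId,          // e.g. "970" for a videos window
            int x, int y,             // offsets from the top left of the display area
            int w, int h,             // width (rightward) and height (downward)
            String background,        // background source, or "none"
            String descriptiveField,  // e.g. "cast", "crew", "images", "videos"
            String mediaType,         // "video", "audio", "picture", "page", "text"
            List<String> sources) {}  // e.g. IVA first, then TMS, then a local server

    // Example row matching the prioritization described in the text:
    //   new WindowConfig("970", 800, 400, 240, 160, "none", "videos", "page",
    //                    List.of("IVA", "TMS", "local"));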
  • An example of this approach for a movie trailer would present a query "(movie trailer) and (asset title)" to a search engine at the server. If a search result is available from the search query, the search engine will return a link to a video asset that corresponds to a movie trailer for the asset title.
  • This scheme can be applied for obtaining video, audio, pictures, text, applications, and the like.
  • a second server can be queried in accordance with the priority information indicated in a table and the like.
  • Other techniques for receiving descriptive information media assets that correspond to media assets can be implemented in accordance with the described principles.
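  • A sketch of this retrieval scheme follows, assuming a placeholder SearchClient interface for whatever search engine each server exposes: the query is presented to each server in priority order, and the lookup falls back to the next server when no result is returned.

    // Query prioritized servers for a descriptive information media asset,
    // falling back down the priority list when a server has no result.
    import java.util.List;
    import java.util.Optional;

    interface SearchClient {
        Optional<String> findLink(String query); // a link to a media asset, if any
    }

    final class DescriptiveAssetLocator {
        Optional<String> locateTrailer(String assetTitle, List<SearchClient> servers) {
            String query = "(movie trailer) and (" + assetTitle + ")";
            for (SearchClient server : servers) {   // priority order, highest first
                Optional<String> link = server.findLink(query);
                if (link.isPresent()) {
                    return link;
                }
            }
            return Optional.empty();                // no server had a matching asset
        }
    }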
  • FIG. 10 illustrates an exemplary embodiment of a user interface 1000 presenting descriptive information and descriptive information media assets that are associated with a media asset in accordance with the present disclosure.
  • windows 1020, 1025, 1030, 1035, 1040, 1045, 1050, and 1055 correspond to different actors that are associated with the descriptive field "cast".
  • the activation of any of these windows can bring up an additional page of descriptive information about a specific actor.
  • Such a process can be continued indefinitely in accordance with the present description.
  • a descriptive information media asset can also be played back when such a window is activated.
  • A further example of how FIG. 10 can be rendered is shown in TABLE II, which lists for each window the position, size, descriptive field, media type, source, and background, whereby such information can be rendered in accordance with the present description.
  • FIG. 11 illustrates an exemplary embodiment of user interface 1100 presenting video trailers as descriptive information media assets that are associated with a media asset in accordance with the present disclosure.
  • display area 1120 displays a number of screenshots for video trailers 1120, 1125, 1130, 1135, 1140, 1145, and 1150 which can be played back by activating any of the corresponding windows for such video trailers.
  • a video trailer can be accessed from a specified server for playback on a device in accordance with the present principles.
  • FIG. 12 illustrates an exemplary embodiment of a user interface 1200 for presenting descriptive information in accordance with the present disclosure.
  • For a display area 1210, several windows 1220, 1230, 1240, 1250, 1260, 1270, 1280, and 1290 are presented. Notably, parts of windows 1250, 1260, and 1290 reside off the screen of display area 1210, where a scrolling operation can be employed to bring such windows completely into view. As stated earlier, each window has a corresponding descriptive information field or descriptive information media asset that is callable using the present principles.
  • the elements of FIG. 12 can be customized by a user, where the various windows can be moved around, resized, added, or deleted from such a display area using gestures, control input commands, dragging and dropping, and the like. That is, if a user wants to add a new window to a user interface for "crew" information, such an addition changes the attributes of a configuration file, table, database, and the like.
  • FIG. 13 illustrates an exemplary embodiment of a user interface 1300 for presenting descriptive information that can be modified by a user moving a component of the user interface in accordance with the present disclosure. For example, if a user were to move a window 1230 to the position at 1235, the corresponding "x" and "y" coordinates for the window change.
  • FIG. 15 illustrates an exemplary embodiment of a user interface 1500 for presenting descriptive information that can be modified by a user using a resizing operation in accordance with the present disclosure.
  • a window 1230 is resized to occupy the area 1235.
  • Such a resizing operation for scaling the contents of a video window is known in the art.
  • the results of the resizing operation will change the corresponding "w” and "h” for a window where such changes are stored in a relevant configuration file, table, database, and the like.
  • User interface 1600 displaying the results of the resizing operation is shown in FIG. 16 in accordance with the present disclosure.
  • FIG. 17 illustrates an exemplary embodiment of a user interface 1700 for presenting descriptive information that can be modified by a user in accordance with the present disclosure.
  • a modified version of FIG. 12 is shown which contains fewer elements than user interface 1200.
  • windows 1230, 1240, 1250, 1260, 1270, and 1290 have been rearranged in a manner as described for the present principles.
  • windows 1220 and 1280 have been removed from user interface 1200 to yield user interface 1700.
  • FIG. 18 illustrates an exemplary embodiment of a user interface 1800 for presenting descriptive information that can be modified by a user in accordance with the present disclosure.
  • windows 1230, 1240, 1250 and 1260 are grouped together.
  • Window 1290 has been moved to another area of display area 1210. It is noted that window 1270 is deleted from this configuration.
  • FIG. 19 illustrates an alternative arrangement of windows 1230, 1240, 1250, 1260, and 1290 for a user interface 1900 in accordance with the present disclosure.
  • FIG. 20 presents an exemplary embodiment of a block diagram 2000 with various servers that can be designated for receiving descriptive information media assets for a device.
  • servers 2010, 2020, and 2030 represent servers such as IVA, TMS, and the like.
  • Server 2040 is a server operated by a network service provider 2050 where the service provider provides network service to a device 2060.
  • a service provider 2050 may want their corresponding server 2040 to have a higher priority than other servers 2010, 2020, and 2030 because service provider 2050 can derive an economic benefit via advertising or attempting to sell a media asset through their server.
  • network service provider 2050, in response to a query for descriptive information for a media asset, can present a trailer (as a descriptive information field media asset) which allows a user device 2060 to request the media asset from the service provider 2050 through a video on demand service. This prevents outside services such as Amazon, iTunes, Netflix, IVA, IMDB, and the like from upselling a user through their information services.
  • a local storage device 2070 coupled to a user's device 2060 can have descriptive information and descriptive information media assets available, as well. Such priority information can be designated in a table, database, configuration file, and the like.
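  • Such a priority designation can be as simple as an ordered list, as in the following sketch; the entries are illustrative stand-ins for the servers of FIG. 20, with the service provider's own server listed first.

    // Ordered source list for descriptive information; a lookup walks this
    // list from the top. Entries are illustrative names, not real endpoints.
    import java.util.List;

    final class SourcePriority {
        static final List<String> DESCRIPTIVE_INFO_SOURCES = List.of(
                "serviceProviderServer2040", // provider operated, preferred
                "server2010",                // e.g. IVA
                "server2020",                // e.g. TMS
                "server2030",
                "localStorage2070");         // user's local storage device
    }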
  • Media assets and their corresponding descriptive information media assets can be called by using an application server 116, where various resources (files, components of media files, and the like) are constructed such that a media asset will have a resource for each category that can be generated.
  • a “cast” resource will correspond to the "cast and crew” information that is shown.
  • videos under the "content video” resource can be implemented with text, video, pictures, JAVA code, HTML code, and the like.
  • This service takes a JSON request with a contentId parameter, and returns a JSON response with the IVA (Internet Video Archive) trailer URLs ingested into the database during the import process.
  • This service takes a JSON request with a contentId parameter, and returns a JSON response with the corresponding image URLs returned from the database, or the JSON response available in the JSON response file in the following location: /opt/Spectrum/catalogs/backstage/contentimages
  • This service consumes a JSON request with a personId parameter, and returns a JSON response by reading the JSON response file from the following location.
  • This service consumes a JSON request with a contentId parameter, and returns a JSON response by reading the JSON response file from the following location: /opt/Spectrum/catalogs/backstage/homepage
  • A content notes service is exposed at backstage/contentnotes/{contentId}, with an example JSON response file at …om/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5MV96e1ffi2670a89bcde523f427a8f2061a-contentnotes.json
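  • Hypothetical request and response shapes for the trailer service described above are sketched below; the text specifies only a contentId parameter in and trailer URLs out, so every field name and value here is an assumption.

    // Example JSON shapes for the trailer service; all values are invented.
    final class TrailerServiceShapes {
        static final String REQUEST = """
                { "contentId": "MV123456" }""";

        static final String RESPONSE = """
                { "contentId": "MV123456",
                  "trailers": [
                    { "title": "Theatrical trailer",
                      "url": "http://iva.example.com/trailers/MV123456.mp4" }
                  ] }""";
    }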
  • the creation of the resources described above can come from a variety of sources and data collection methods.
  • a homepage such as presented in FIG. 12 is generated in response to a user requesting more information about a media asset.
  • the homepage ideally displays a number of descriptive information fields for a user to select from.
  • the generation of the homepage is from various resources. For example, when producing the homepage, the images that are used to populate the homepage come from Baseline.
  • an image previewer application or slideshow can be activated which shows the selected image and other images that are related to a selected media asset which are also in the "image" category.
  • a background image from a database can also be used for a homepage, although any other background graphic can be made available. It is also considered that the vendors of various images and information that are used to construct a homepage and the underlying pages beneath can be replaced depending on the requirements of a network service provider who implements the described disclosure.
  • resources as descriptive information media assets such as movie trailers, actor interviews, crew interviews, documentaries, commercials about the media asset, and the like are available for playback using a program such as FLASH.
  • Such "video” resources can come from a database such as IVA which contains the videos, video thumbnails, title, and description information, although other databases can be used to retrieve such information.
  • each member of the cast is presented with an image, actor name, and a corresponding character name, if such information is available. For example, if an image is available for an actor, the image, actor name, and character name are displayed. If an image is not available, the actor name and character name are shown (if such information is available), and the application will use an alternative or default image.
  • an example of the information which is available for the actor, such as the actor's date of birth, date of death, and other biographical information, can be displayed if such information is available from a database such as Baseline, IMDB, and the like.
  • 4 videos and 5 images are displayed for a particular actor, but the number of images and videos can be changed based on the requirements of a network service provider, user, and the like in accordance with the present principles.
  • the selection of videos and images is described above.
  • the source of videos or images can come as the results from a search from a service such as YouTube, Google, and the like, or videos can be pre-populated due to a manual operation.
  • When the "crew" information is accessed, each member of the crew will have their name listed, with crew role and an image of the crew member. If an image of a crew member is not available, a default image will be displayed. The selection of a particular crew member will bring up a biography page of the crew member when such information is available.
  • an "additional notes" category can be implemented using information for a media asset such as country of origin of the media asset, filming locations, box office gross, total sales of media (such as digital downloads, DVDs, Blu-Rays, streaming media, and the like), awards, soundtrack information, and information on how to buy the soundtrack or songs used for the media asset from a source like AMAZON, ITUNES, and the like. If such information is not available, the additional notes category should not be displayed.
  • FIG. 21 is a flowchart of a process 2100 for modifying a user interface in accordance with the present disclosure.
  • a user interface is generated in response to a user command where the user interface should contain multiple windows. As described before, this user interface is generated because a user requests additional descriptive information about a media asset.
  • descriptive information is presented as either a descriptive information media asset related to the media asset or information pertaining to a media asset descriptive information field. That is, the activation of a window either displays a different descriptive information media asset (a first trailer, a second trailer, an audio interview, etc.) and/or a media asset descriptive information field that generates an additional page of information, as described in accordance with the present disclosure.
  • the generation of the user interface can be determined by a configuration file, table, database, local storage, remote storage, and the like which indicates how many windows are to be generated, the position of such windows, the location of such windows, and whether such windows correspond to a descriptive information media asset and/or a media asset descriptive information field.
  • the configuration information can also be referenced to indicate the source from which descriptive information should be retrieved, where such sources include remote servers and local storage.
  • a user can modify the presentation of such windows in step 2120, where modifications can be the resizing of a window in step 2130, a change in the position of a window in step 2140, an addition or a removal of a window in step 2150, and the like.
  • the location from which descriptive information can be retrieved can be prioritized by using a hierarchical listing of servers. Such a step can take place when a service provider wants their own servers to be used instead of other servers.
  • In step 2160, whatever modifications are made to a user interface are stored in memory, including information containing the change in window location, change in window size, the addition or removal of windows, and the like.
  • this step reflects that the presentation of descriptive information can change because of the addition or removal of a window.
  • the addition of a window which corresponds to "cast" will have a window for the media asset descriptive information field "cast" generated in the user interface the next time a user requests descriptive information for a media asset.
  • the addition of another window can increase the number of trailers shown in a display area. The removal of a window would therefore eliminate a corresponding descriptive information field or descriptive media asset from the display of the user interface the next time a user requests descriptive information for a media asset.
  • the subsequent presentation of descriptive information is changed by such modifications.
  • Other modifications can be implemented in accordance with the present principles.
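  • A sketch of how the modifications of steps 2120 through 2160 might be applied and stored is given below; the in-memory map stands in for the configuration file, table, or database mentioned above, and all class and field names are illustrative assumptions.

    // Apply and persist user modifications to the window layout so that the
    // next generation of the user interface reflects them (steps 2120-2160).
    import java.util.LinkedHashMap;
    import java.util.Map;

    final class WindowLayoutStore {
        record Window(int x, int y, int w, int h, String descriptiveField) {}

        private final Map<String, Window> layout = new LinkedHashMap<>();

        void addWindow(String id, Window w) { layout.put(id, w); }    // step 2150
        void removeWindow(String id)        { layout.remove(id); }    // step 2150

        void moveWindow(String id, int x, int y) {                    // step 2140
            Window old = layout.get(id);
            layout.put(id, new Window(x, y, old.w(), old.h(), old.descriptiveField()));
        }

        void resizeWindow(String id, int w, int h) {                  // step 2130
            Window old = layout.get(id);
            layout.put(id, new Window(old.x(), old.y(), w, h, old.descriptiveField()));
        }

        // Step 2160: the stored layout is read back the next time a user
        // requests descriptive information for a media asset.
        Map<String, Window> snapshot() { return Map.copyOf(layout); }
    }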
  • The elements shown in the FIGS. may be implemented in various forms of hardware, software, or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
  • the computer readable media and the code written thereon can be implemented in a transitory state (signal) and a non-transitory state (e.g., on a tangible medium such as CD-ROM, DVD, Blu-Ray, Hard Drive, flash card, or other type of tangible storage medium).
  • a transitory state signal
  • a non- transitory state e.g., on a tangible medium such as CD-ROM, DVD, Blu-Ray, Hard Drive, flash card, or other type of tangible storage medium.
  • the functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
  • the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared.
  • processor or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read only memory (“ROM”) for storing software, random access memory (“RAM”), and nonvolatile storage.
  • DSP digital signal processor
  • ROM read only memory
  • RAM random access memory
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.

Abstract

The present disclosure is directed towards a user interface that displays a number of windows where each window, when activated, presents descriptive information about a media asset. The descriptive information itself can either be a further media asset (video, audio, text, picture, computer program) or can be a descriptive information field (synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, genre, date of release). The arrangement of the windows can be changed so that the subsequent display of descriptive information is affected by such a change. Typically, the change adds or removes the presentation of a descriptive information field or of a further asset serving as a descriptive information media asset.

Description

METHOD FOR CUSTOMIZING THE DISPLAY OF DESCRIPTIVE
INFORMATION ABOUT MEDIA ASSETS
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application Serial No. 61/426,137 filed December 22, 2010, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
The present disclosure generally relates to a user interface for displaying descriptive information about a media asset. More particularly, the present disclosure relates to a method for customizing a user interface that displays descriptive information for a media asset.
BACKGROUND OF THE INVENTION
When searching for a media asset (video, audio, computer program, text, and the like), a user can use a service such as an electronic program guide or a search engine to find descriptive information that describes such a media asset. Typically, the results of such a query yield descriptive information that is simply shown as text where the order of such results cannot be modified by a user. That is, descriptive information such as actor information, crew information, a director of a media asset, and the like are presented in the same order with the same type of information, regardless of the contents of a search query.
SUMMARY OF THE DISCLOSURE
A method is presented with a user interface that enables a user to customize the presentation of descriptive information that corresponds to a media asset. The customization includes the addition or removal of windows where such windows correspond to a media asset descriptive information field and/or a descriptive information media asset.
BRIEF DESCRIPTION OF THE DRAWINGS
These, and other aspects, features and advantages of the present disclosure will be described or become apparent from the following detailed description of the preferred embodiments, which is to be read in connection with the accompanying drawings. In the drawings, wherein like reference numerals denote similar elements throughout the views:
FIG. 1 is a block diagram of an exemplary system for delivering video content in accordance with the present disclosure;
FIG. 2 is a block diagram of an exemplary set-top box/digital video recorder (DVR) as a media device in accordance with the present disclosure;
FIG. 3 is a perspective view of an exemplary device in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates an exemplary embodiment of the use of gestures for a sensing controller or touch screen in accordance with the present disclosure;
FIG. 5 presents a generic diagram of an exemplary application server 500 in accordance with the present disclosure;
FIG. 6 illustrates an exemplary embodiment of a search user interface used for conducting a search in accordance with the present disclosure;
FIG. 7 illustrates an exemplary embodiment of a user interface presenting search resulting from a search operation in accordance with the present disclosure;
FIG. 8 illustrates an exemplary embodiment of a user interface presenting descriptive information about a media asset in accordance with the present disclosure;
FIG. 9 illustrates an exemplary embodiment of a user interface presenting descriptive information about a media asset in accordance with the present disclosure;
FIG. 10 illustrates an exemplary embodiment of a user interface presenting descriptive information about the cast associated with a media asset in accordance with the present disclosure;
FIG. 11 illustrates an exemplary embodiment of a user interface presenting descriptive information about the video trailers associated with a media asset in accordance with the present disclosure;
FIG. 12 illustrates an exemplary embodiment of a user interface for presenting descriptive information in accordance with the present disclosure;
FIG. 13 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user moving a component of the user interface in accordance with the present disclosure;
FIG. 14 illustrates an exemplary embodiment of a user interface for presenting descriptive information that was modified by a user in accordance with the present disclosure;
FIG. 15 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user using a resizing operation in accordance with the present disclosure;
FIG. 16 illustrates an exemplary embodiment of a user interface for presenting descriptive information that was modified by a user in accordance with the present disclosure;
FIG. 17 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure;
FIG. 18 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure;
FIG. 19 illustrates an exemplary embodiment of a user interface for presenting descriptive information that can be modified by a user in accordance with the present disclosure;
FIG. 20 presents an exemplary embodiment of a block diagram with various servers that can be designated for receiving descriptive information for a device; and
FIG. 21 illustrates a flowchart of a process for modifying a user interface in accordance with the present disclosure.
DETAILED DESCRIPTION
The present disclosure provides several different embodiments of a user interface that is used for displaying information about media assets such as videos, television shows, movies, audio, music, video games, and the like. In addition, such user interface embodiments can support operations for receiving, recording, playing back, purchasing, and the like. Such user interfaces can be implemented on devices such as a computer, set top box, media server, tablet, mobile phone, personal media device, portable video game system, video game system, and so forth.
A descriptive information application can be a program that introduces more information about a media asset (such as a television show, movie, radio program, music, song, multimedia, game, and the like) than is typically relayed. For example, when a user watches a television show, the user can call up an electronic program guide that lists information such as the time and perhaps the cast of the show. Such information, however, lacks the depth of descriptive information media assets (images, video, audio, webpages, interactive applications, games, descriptive text, media purchase information, suggested media, and the like) which are provided with the descriptive information application discussed herein. That is, the present principles provide a scheme for providing additional media assets, called descriptive information media assets, beyond the typical text description that is provided when a user attempts to find out information about a media asset.
Turning now to FIG. 1, a block diagram of an embodiment of a system 100 for delivering content to a home or end user is shown. The content originates from a content source 102, such as a movie studio or production house. The content may be supplied in at least one of two forms. One form may be a broadcast form of content. The broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc. The broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 1 (106). Delivery network 1 (106) may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 (106) may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, cable broadcast or from an external network via IP. The locally delivered content is provided to a user's set top box/digital video recorder (DVR) 108 in a user's home, where the content will subsequently be included in the body of available content that may be searched by the user.
A second form of content is referred to as special content. Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager. In many cases, the special content may be content requested by the user. The special content may be delivered to a content manager 110. The content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 110 may also incorporate Internet content into the delivery system, or explicitly into the search only such that content may be searched that has not yet been delivered to the user's set top box/digital video recorder 108. The content manager 110 may deliver the content to the user's set top box/digital video recorder 108 over a separate delivery network, delivery network 2 (112). Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106). In addition, the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110. In addition, the scope of the search goes beyond available content to content that may be broadcast or made available in the future.
The set top box/digital video recorder 108 may receive different types of content from one or both of delivery network 1 and delivery network 2. The set top box/digital video recorder 108 processes the content, and provides a separation of the content based on user preferences and commands. The set top box/digital video recorder may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the set top box/digital video recorder 108 and features associated with playing back stored content will be described below in relation to FIG. 2. The processed content is provided to a display device 114. The display device 114 may be a conventional 2-D type display or may alternatively be an advanced 3-D display. It should be appreciated that other devices having display capabilities such as wireless phones, PDAs, computers, gaming platforms, remote controls, multi-media players, or the like, may employ the teachings of the present disclosure and are considered within the scope of the present disclosure.
Application server 116 is coupled to set top box 108 via delivery network 2 (112), where application server 116 can be configured to run at least one application that responds to various calls (requests) from set top box 108. That is, set top box 108 can operate as a client while application server 116 operates as a server and can run various applications such as a web server, database, and the like. The calls can be implemented using a framework such as HTML, JAVA, an AJAX framework, ASP (ACTIVE SERVER PAGES), and the like, where application server 116 responds to received calls or functions from set top box 108 so as to deliver data. Likewise, the application server 116 can operate using a Representational State Transfer (REST) model or the Simple Object Access Protocol (SOAP). In a JAVA based environment, application server 116 can be implemented using a Glassfish based server with a Jersey based Application Program Interface which receives various calls from set top box 108, although alternative servers and APIs can be used.
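As a rough sketch of this client/server exchange, the example below shows how a client such as set top box 108 might issue a REST-style HTTP call to an application server and read back a JSON payload. The host name, port, resource path, and response shape are illustrative assumptions, not details taken from the disclosure.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of a set top box acting as a REST client.
// The server address and resource path are hypothetical stand-ins.
public class DescriptiveInfoClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical call asking the application server for descriptive
        // information about one media asset, identified by a content id.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://appserver.example:8080/assets/12345/descriptive-info"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The body arrives as a JSON string; a real client would parse it
        // before populating the user interface windows.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}
```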
A second model can also be used where application server 116 and set top box 108 are configured to operate in a peer-to-peer environment where applications are run in a decentralized manner.
Turning now to FIG. 2, a block diagram of an embodiment of the core of a set top box/digital video recorder 200 is shown. The device 200 shown may also be incorporated into other systems including display device 114. In addition, device 200 can be implemented where display device 214 is a touch screen device so that the display 214 can be used for inputting commands or gestures for controlling the operation of device 200. In any of these cases, several components necessary for complete operation of the system are not shown in the interest of conciseness, as they are well known to those skilled in the art.
In the device 200 shown in FIG. 2, the content is received in an input signal receiver 202. The input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber and phone line networks. The desired input signal may be selected and retrieved in the input signal receiver 202 based on user input provided through a control interface (not shown). The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device 114 or an audio amplifier (not shown). Alternatively, the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF). The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content, based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.
A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, e.g., navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216. The storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static random access memory, or dynamic random access memory, or may be an interchangeable optical disk storage system such as a compact disk drive or digital video disk drive. In one embodiment, the storage device 212 may be external and not be present in the system.
The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device of the type described above. The display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as high definition multimedia interface (HDMI). It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results in a three dimensional array as will be described in more detail below.
The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and a user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content/media assets and the communications with remote servers to playback content/media assets. Furthermore, as will be described below, the controller 214 performs searching of content, either stored or to be delivered via the delivery networks described above. The controller 214 is further coupled to control memory 220 (e.g., volatile or non-volatile memory, including random access memory, static RAM, dynamic RAM, read only memory, programmable ROM, flash memory, EPROM, EEPROM, etc.) for storing information and instruction code for controller 214. Further, the implementation of the memory may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit connected together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
To operate effectively, the user interface 216 of the present disclosure employs an input device that moves a cursor around the display, which in turn causes the content to enlarge as the cursor passes over it. In one embodiment, the input device is a remote controller, with a form of motion detection, such as a gyroscope or accelerometer, which allows the user to move a cursor freely about a screen or display. In another embodiment, the input device is a controller in the form of a touch pad or touch sensitive device that will track the user's movement on the pad and reflect it on the screen. In another embodiment, the input device could be a traditional remote control with direction buttons.
Turning now to FIG. 3, the user interface process of the present disclosure employs an input device that can be used to express functions, such as fast forward, rewind, etc. To allow for this, a tablet or touch panel device 300 (which is the same as the touch screen device 116 shown in FIG. 1 and/or is an integrated example of media device 108 and touch screen device 116) may be interfaced via the user interface 216 and/or touch panel interface 222 of the receiving device 200. The touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device. In one embodiment, the touch panel 300 may simply serve as a navigational tool to navigate the grid display. In other embodiments, the touch panel 300 will additionally serve as the display device allowing the user to more directly interact with the navigation through the grid display of content. The touch panel device may be included as part of a remote control device containing more conventional control functions such as activator buttons. The touch panel 300 can also include at least one camera element. As described in further detail below, content displayed on the touch panel device 300 may be zapped or thrown to the main screen (e.g., display device 114 shown in FIG. 1).
Turning now to FIG. 4, the use of a gesture sensing controller or touch screen, such as shown, provides for a number of types of user interaction. The inputs from the controller are used to define gestures and the gestures, in turn, define specific contextual commands. The configuration of the sensors may permit defining movement of a user's fingers on a touch screen or may even permit defining the movement of the controller itself in either one dimension or two dimensions. Two-dimensional motion, such as a diagonal, and a combination of yaw, pitch and roll can be used to define any three-dimensional motion, such as a swing. A number of gestures are illustrated in FIG. 4. Gestures are interpreted in context and are identified by defined movements made by the user.
Bumping 420 is defined by a two-stroke drawing indicating pointing in one direction, either up, down, left or right. The bumping gesture is associated with specific commands in context. For example, in a TimeShifting mode, a left-bump gesture 420 indicates rewinding, and a right-bump gesture indicates fast-forwarding. In other contexts, a bump gesture 420 is interpreted to increment a particular value in the direction designated by the bump. Checking 430 is defined as drawing a checkmark. It is similar to a downward bump gesture 420. Checking is identified in context to designate a reminder, user tag or to select an item or element. Circling 440 is defined as drawing a circle in either direction. It is possible that both directions could be distinguished. However, to avoid confusion, a circle is identified as a single command regardless of direction. Dragging 450 is defined as an angular movement of the controller (a change in pitch and/or yaw) while pressing a button (virtual or physical) on the tablet 300 (i.e., a "trigger drag"). The dragging gesture 450 may be used for navigation, speed, distance, time-shifting, rewinding, and forwarding. Dragging 450 can be used to move a cursor, a virtual cursor, or a change of state, such as highlighting, outlining, or selecting on the display. Dragging 450 can be in any direction and is generally used to navigate in two dimensions. However, in certain interfaces, it is preferred to modify the response to the dragging command. For example, in some interfaces, operation in one dimension or direction is favored with respect to other dimensions or directions depending upon the position of the virtual cursor or the direction of movement. Nodding 460 is defined by two fast trigger-drag up-and-down vertical movements. Nodding 460 is used to indicate "Yes" or "Accept." X-ing 470 is defined as drawing the letter "X." X-ing 470 is used for "Delete" or "Block" commands. Wagging 480 is defined by two trigger-drag fast back-and-forth horizontal movements. The wagging gesture 480 is used to indicate "No" or "Cancel."
Depending on the complexity of the sensor system, only simple one dimensional motion or gestures may be allowed. For instance, a simple right or left movement on the sensor as shown here may produce a fast forward or rewind function. In addition, multiple sensors could be included and placed at different locations on the touch screen. For instance, a horizontal sensor for left and right movement may be placed in one spot and used for volume up and down, while a vertical sensor for up and down movement may be placed in a different spot and used for channel up/down. In this way specific gesture mappings may be used. As discussed in further detail below, a two finger swipe gesture may be utilized to initiate the throwing or moving of content from the tablet 300 to the main screen or display device 114.
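To illustrate the context-dependent gesture interpretation described above, the following is a minimal sketch that maps recognized gestures to commands. The gesture names follow FIG. 4, but the enum layout, the Context type, and the specific command strings are assumptions made for illustration.

```java
// Illustrative sketch of context-dependent gesture interpretation.
// Only the gesture names come from the description above; the rest is assumed.
public class GestureMapper {
    enum Gesture { BUMP_LEFT, BUMP_RIGHT, CHECK, CIRCLE, DRAG, NOD, X, WAG }
    enum Context { TIME_SHIFTING, VALUE_ADJUST, NAVIGATION }

    static String commandFor(Gesture g, Context c) {
        switch (g) {
            case BUMP_LEFT:
                // In a TimeShifting mode a left bump means rewind.
                return c == Context.TIME_SHIFTING ? "REWIND" : "DECREMENT";
            case BUMP_RIGHT:
                return c == Context.TIME_SHIFTING ? "FAST_FORWARD" : "INCREMENT";
            case CHECK:  return "SELECT_OR_TAG";     // reminder, tag, or select
            case CIRCLE: return "CIRCLE_COMMAND";    // single command, either direction
            case DRAG:   return "MOVE_CURSOR";       // navigation in two dimensions
            case NOD:    return "ACCEPT";            // "Yes"
            case X:      return "DELETE_OR_BLOCK";
            case WAG:    return "CANCEL";            // "No"
            default:     return "NONE";
        }
    }

    public static void main(String[] args) {
        System.out.println(commandFor(Gesture.BUMP_LEFT, Context.TIME_SHIFTING)); // REWIND
    }
}
```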
FIG. 5 presents a generic diagram of an exemplary application server 500 in accordance with the present disclosure that can be used with delivery networks 106, 112 to provide services to devices on such networks. Application server 500 can contain a storage 510 which is used to store information such as applications, data for such applications, database information, web page information, media data, and the like. Storage 510 can be implemented as one or more hard drives, a RAID system, volatile memory, non-volatile memory, disc based storage, solid state storage, reel to reel tape, and the like. Operating system 520 is the underlying system that is used to operate server 500. Examples of operating systems that can be used are Windows, Solaris, Unix, Linux, MacOS, and the like. Applications 530 are applications such as a web server, file server, application licensing, search engine, authentication, telnet, e-mail, file transfer, media delivery, media asset information program, program code execution, and the like that can be run on application server 500.
Communication interface 540 provides the interface for server 500 to communicate with devices on network 106, 112. Examples of communication interfaces are Ethernet, Satellite Interface, Fiber Connection, T1, T2, T3, Coaxial Cable, and the like, where such communications are preferably packet based, although other systems of data delivery can be used. When a user is interested in finding out about media assets, a user can use an interface as shown in FIG. 6, which represents an exemplary embodiment of a search user interface 600 to find out information about different media assets. Specifically, user interface 600 is used as a front end so that a search query is typed into area 620, whereby the entered search query is delivered to a search engine for search results, as known in the art. That is, a query can be made by a device to a remote search engine such as GOOGLE, YAHOO, BING and the like, a search can be conducted for search results in a local storage device, multiple search engines can be queried at the same time for search results, and other techniques can be used in accordance with the present description. User interface 600 presents a display area 610 that is viewable on a display device, display screen, and the like. A search function can be called up by a user activating the search control 605, whereby the results of such an action bring up search user interface 600. Area 620 is used to enter search criteria by using the characters shown in keyboard 630. Button 640 can be activated to bring up popular searches made by other users, where the search information can be obtained from a remote server. Button 645, when activated, brings up searches that were previously made by a user.
In this present example, a user has typed in the terms "how to train your dragon" into area 620. The results of this search query are shown in FIG. 7, which illustrates an exemplary embodiment of user interface 700 presenting search results. That is, search results 650, 653, and 655 present different versions of the movie "How to Train Your Dragon" where 650 corresponds to a regular version of the movie, 653 corresponds to a 3D version, and 655 relates to an IMAX version of the movie. A movie can be selected by activating a play button 659 that is next to a search result. Ideally, play button 659 will be generated if a user has already purchased a media asset, has access to a media asset through a subscription, has a version of a media asset accessible through local or remote storage, and the like. Alternatively, a media asset can be purchased if a user activates an option such as options 657. In addition, descriptive information about a media asset can be accessed by activating option 657, as well.
FIG. 8 illustrates an exemplary embodiment of user interface 800 presenting descriptive information about a media asset in accordance with the present disclosure. Examples of descriptive information can be poster art, synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, genre, date of release, and the like. Such descriptive information can be retrieved from sources such as Baseline, Tribune Media Services (TMS), Internet Movie Database (IMDB), Internet Video Archive (IVA), or locally from a device; electronic program guide information, rating, runtime, release date, and the like can also be accessed to produce such information as well. Descriptive information can also be classified by media asset descriptive information fields including categories such as synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, genre, date of release, and the like.
Descriptive information media assets are videos, audio, pictures, accompanying text, computer programs, and the like that can be called up as part of descriptive information when one seeks information about a media asset. For example, if a user is searching for descriptive information for a movie as a media asset, pictures of the actors in the movie, trailers of the movie, sound from an interview with a director of the movie, and the like all represent different forms of descriptive information media assets. Referring back to user interface 800, some descriptive information includes poster art 610 and descriptive information text 660 which contains various descriptive information fields including asset title, crew, synopsis of the asset, rating, runtime, studio, and date of release, among other fields. It is noted that display area 610 sometimes only presents part of the information that is available, where user interface 800 presents additional information about the movie asset's director which can be accessed by scrolling display area 610 down. That is, it is expected that descriptive information for a media asset will sometimes be larger than shown in a display area 610, where such information can be located by instituting a scroll operation in an appropriate direction via a gesture, control input, command, and the like. A video trailer of a media asset, as an example of a descriptive information media asset, can be accessed by activating play trailer option 662 which will cause the playback of a trailer on a main screen, secondary screen, and the like.
FIG. 9 presents an exemplary embodiment of user interface 900 presenting descriptive information about a media asset in accordance with the present disclosure. Instead of presenting merely text in response to a request for descriptive information for a media asset, user interface 900 presents pictures in a display area 910 with descriptive text for corresponding media asset descriptive fields. For example, aside from asset title bar 920 which shows the name of a media asset, window 925 shows that an interactive application can be called up for the selected media asset. In addition, window 930 indicates that there is additional information about a media asset where such information can be reviews of a media asset by critics, user comments about a media asset, a screenplay of a media asset, and the like.
Window 940 shows cast information, crew information is accessible through window 950, images from a media asset can be accessed through window 960, and videos of a media asset can be accessed through the activation of window 970. Optionally, a graphic background such as 980 can be called up in which windows such as 925, 930, 940, 950, 960, 970, and 980 are placed in the foreground of display area 910. It is expected that other windows for other descriptive information fields can be used in accordance with the present description.
An example of a configuration table that can be used to generate an embodiment of the descriptive user interface as presented in FIG. 9 is shown in TABLE 1.

Window | Position | Size     | Media Asset Descriptive Information Field | Media Type    | Source of Corresponding Descriptive Field Asset | Background
920    | (X1, Y1) | (W1, H1) | Asset Title                               | Text, Page    | Server1                                         | Server1
925    | (X2, Y2) | (W2, H2) | Interactive Applications                  | Text, Page    | Server2                                         | Server1
930    | (X3, Y3) | (W3, H3) | Additional Information                    | Text, Page    | Server1                                         | Server1
940    | (X4, Y4) | (W4, H4) | Cast                                      | Picture, Page | Server1                                         | Server1
950    | (X5, Y5) | (W5, H5) | Crew                                      | Picture, Page | Server1                                         | Server1
960    | (X6, Y6) | (W6, H6) | Images                                    | Picture, Page | Server1, Server2                                | Server1
970    | (X7, Y7) | (W7, H7) | Videos                                    | Video, Page   | Server2, Server1, Server3                       | Server1

TABLE 1
TABLE 1 lists that for a rendered window, there is corresponding position and size information. The position information, in this example, is presented as x and y coordinates where the "x" is measured from the top left part of a display area, as shown in FIG. 9, where the "x" value should be positive and indicates a movement towards the right in accordance with a specified unit (pixels, inches, centimeters, and the like). The "y" value is measured from the top left part of a display area, as shown in FIG. 9, where the "y" value should be positive and indicates a movement down in accordance with a specified unit (pixels, inches, centimeters, and the like). The width value "w" is the size of a window in the x direction, which begins at a specified "x" coordinate and extends towards the right in a specified unit (for example, a window is 100 pixels wide, .5 inches wide, 3.23 cm wide, and the like). The height value "h" is the size of a window in the y direction, which begins at a specified "y" coordinate for a window and extends down in a specified unit (for example, a window is 400 pixels long, .75 inches high, 8 cm high, and the like). It is noted that the construction of a window can take other forms and use other units. The background field indicates that a background can be called from a specific server, or the term 'none' is used when no background exists.
Continuing with the description of TABLE 1, the field called Media Asset Descriptive Field indicates a field of descriptive information that is accessible by the activation of a corresponding window. Media type indicates what is to be shown in a window that corresponds to a descriptive information field, which can be a video, audio, picture, page, computer program, text, a combination thereof, and the like. That is, when a window is activated, a corresponding descriptive information media asset which corresponds to a media asset can be accessed. For example, if a media asset has a corresponding descriptive media asset that is a trailer in the form of a video, such a trailer can be called and activated for playback if a corresponding window is activated. The server location of a corresponding descriptive media asset can be specified in the source field. It is noted that, for some media asset descriptive fields, there may be a priority assigned to the sources of such descriptive field media assets. For example, a ranking of sources can be that a descriptive field media asset is first to come from IVA, then TMS, and then a local server. This prioritization of servers for information can be designated by a user, service provider, and the like by entering in relevant information. Other examples of this prioritization of servers can be implemented in accordance with the present description.
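For concreteness, a row of a configuration table like TABLE 1 could be modeled in code roughly as follows. This is a minimal in-memory sketch; the record name, field names, and example values are invented for illustration and are not part of the disclosure, which leaves the storage format open (file, table, database, local or remote storage).

```java
import java.util.List;

// Sketch of one row of a configuration table like TABLE 1.
public record WindowConfig(
        int windowId,              // e.g. 940
        int x, int y,              // position, top-left origin as described above
        int w, int h,              // width and height in the chosen unit
        String descriptiveField,   // e.g. "Cast", "Videos"
        List<String> mediaTypes,   // e.g. ["Picture", "Page"]
        List<String> sources,      // servers listed in priority order
        String background) {       // background source, or "None"

    public static void main(String[] args) {
        // Hypothetical entry corresponding to the "Cast" window 940.
        WindowConfig cast = new WindowConfig(
                940, 40, 220, 180, 120,
                "Cast", List.of("Picture", "Page"),
                List.of("Server1"), "Server1");
        System.out.println(cast);
    }
}
```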
It is noted that in the field of media type, there can be a reference to an additional page which would be set up in a similar way as TABLE 1, where additional windows, media assets, and the like can be described. Typically, the identifier in the table called "page" will indicate the presence of an additional page as shown for FIGS. 10 and 11 as described below. For example, the term "page" with another asset type "picture" in the media type field would indicate that a picture should be shown for a corresponding window, but a page of more windows is typically generated when the window is activated. Alternatively, the term "page" with "video" would have a video playing in a window whereby the referenced page is generated when the corresponding window is activated. That is, when a term such as "video", "audio", "text", "interactive program", or "picture" appears without the term "page" in a table for a window, a corresponding page will not be generated by the activation of a window. Other approaches are possible in accordance with the described principles. When requesting a descriptive information media asset from a server, it is possible that such an asset will be specified by a file name in a configuration table, file, database, or other manner. It is contemplated that sometimes there will not be a specific file referenced in a request for a descriptive information media asset, but rather a query for an asset can be made to a search engine residing on a server. An example of this approach for a movie trailer would present a query "(movie trailer) and (asset title)" to a search engine at the server. If a search result is available from the search query, the search engine will return a link to a video asset that corresponds to a movie trailer for the asset title. This scheme can be applied for obtaining video, audio, pictures, text, applications, and the like. In addition, if a first server is queried and is incapable of providing a link to a descriptive information media asset, a second server can be queried in accordance with the priority information indicated in a table and the like. Other techniques for receiving descriptive information media assets that correspond to media assets can be implemented in accordance with the described principles.
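A minimal sketch of the prioritized fallback just described might look like the following: each server is asked in priority order for a link to the asset, and the first server able to answer wins. The query form is the "(movie trailer) and (asset title)" example from the text; the SearchService interface and server implementations are hypothetical.

```java
import java.util.List;
import java.util.Optional;

// Sketch of prioritized server lookup for a descriptive information media asset.
public class PrioritizedTrailerLookup {
    interface SearchService {
        // Returns a link to a matching asset, or empty if none is available.
        Optional<String> query(String searchTerms);
    }

    static Optional<String> findTrailer(List<SearchService> serversInPriorityOrder,
                                        String assetTitle) {
        // Query form taken from the example above.
        String terms = "(movie trailer) and (" + assetTitle + ")";
        for (SearchService server : serversInPriorityOrder) {
            Optional<String> link = server.query(terms);
            if (link.isPresent()) {
                return link;       // first server able to provide a link wins
            }
        }
        return Optional.empty();   // no server could provide a link
    }

    public static void main(String[] args) {
        // Hypothetical servers: the first (e.g. IVA) has no result,
        // the second returns a link, so the second is used.
        SearchService first = t -> Optional.empty();
        SearchService second = t -> Optional.of("http://example/trailer.mp4");
        System.out.println(findTrailer(List.of(first, second), "How to Train Your Dragon"));
    }
}
```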
FIG. 10 illustrates an exemplary embodiment of a user interface 1000 presenting descriptive information and descriptive information media assets that are associated with a media asset in accordance with the present disclosure. Specifically, as shown in display area 1010, windows 1020, 1025, 1030, 1035, 1040, 1045, 1050, and 1055 correspond to different actors that are associated with the descriptive field "cast". In the present case, the activation of any of these windows can bring up an additional page of descriptive information about a specific actor. Such a process can be continued indefinitely in accordance with the present description. A descriptive information media asset can also be played back when such a window is activated, as well.
A further example of how FIG. 10 can be rendered is shown in TABLE 2, whereby such information can be rendered in accordance with the present description.

Window | Position | Size     | Descriptive Information Media Asset Field | Media Type    | Source of Corresponding Descriptive Asset | Background
1020   | (X1, Y1) | (W1, H1) | Cast; Craig Ferguson                      | Picture, Page | Server1                                   | None
1025   | (X2, Y2) | (W2, H2) | Cast; Kristen Wiig                        | Picture, Page | Server2                                   | None
1030   | (X3, Y3) | (W3, H3) | Cast; Jay Baruchel                        | Video         | Server1                                   | None
1035   | (X4, Y4) | (W4, H4) | Cast; America Ferrera                     | Picture, Page | Server1                                   | None
1040   | (X5, Y5) | (W5, H5) | Cast; Christopher Mintz-Plasse            | Video         | Server2, Server1                          | None
1045   | (X6, Y6) | (W6, H6) | Cast; T.J. Miller                         | Picture, Page | Server1, Server2                          | None
1050   | (X7, Y7) | (W7, H7) | Cast; Gerard Butler                       | Picture, Page | Server3                                   | None
1055   | (X8, Y8) | (W8, H8) | Cast; Jonah Hill                          | Picture, Page | Server2                                   | None

TABLE 2
FIG. 11 illustrates an exemplary embodiment of user interface 1100 presenting video trailers as descriptive information media assets that are associated with a media asset in accordance with the present disclosure. In this example, display area 1110 displays a number of screenshots for video trailers 1120, 1125, 1130, 1135, 1140, 1145, and 1150, which can be played back by activating any of the corresponding windows for such video trailers. As previously described, a video trailer can be accessed from a specified server for playback on a device in accordance with the present principles.
FIG. 12 illustrates an exemplary embodiment of a user interface 1200 for presenting descriptive information in accordance with the present disclosure. For a display area 1210, several windows 1220, 1230, 1240, 1250, 1260, 1270, 1280, and 1290 are presented. Notably, parts of windows 1250, 1260, and 1290 reside off screen of display area 1210, where a scrolling operation can be employed to bring such windows completely into view. As stated earlier, each window has a corresponding descriptive information field or descriptive information media asset that is callable using the present principles. The elements of FIG. 12 can be customized by a user where the various windows can be moved around, re-sized, added, or deleted from such a display area using gestures, control input commands, dragging and dropping, and the like. That is, if a user wants to add a new window to a user interface for "crew" information, such an addition changes the attributes of a configuration file, table, database, and the like.
FIG. 13 illustrates an exemplary embodiment of a user interface 1300 for presenting descriptive information that can be modified by a user moving a component of the user interface in accordance with the present disclosure. For example, if a user were to move a window 1230 to the position at 1235, the corresponding "x" and "y" coordinates for the window change.
Likewise, the movement of window 1290 to position 1295 changes the corresponding "x" and "y" coordinates for window 1290, as well. Such changes are stored in the relevant configuration file, table, database, and the like. A user interface 1400 displaying the results of the movement operation is shown in FIG. 14 in accordance with the present disclosure.

FIG. 15 illustrates an exemplary embodiment of a user interface 1500 for presenting descriptive information that can be modified by a user using a resizing operation in accordance with the present disclosure. For a display area 1210, a window 1230 is resized to occupy the area 1235. Such a resizing operation for scaling the contents of a video window is known in the art. The results of the resizing operation will change the corresponding "w" and "h" for a window, where such changes are stored in a relevant configuration file, table, database, and the like. User interface 1600 displaying the results of the resizing operation is shown in FIG. 16 in accordance with the present disclosure.
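A rough sketch of how such move and resize operations might update the stored layout follows. The in-memory map stands in for the configuration file, table, or database mentioned above, and the class and field names are assumptions for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of persisting window geometry changes so the next rendering
// of the user interface reflects the user's modifications.
public class WindowLayoutStore {
    static class Geometry {
        int x, y, w, h;
        Geometry(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        public String toString() { return "(" + x + "," + y + ") " + w + "x" + h; }
    }

    private final Map<Integer, Geometry> layout = new HashMap<>();

    void moveWindow(int windowId, int newX, int newY) {
        Geometry g = layout.get(windowId);
        g.x = newX;            // the new "x" and "y" replace the stored coordinates
        g.y = newY;
    }

    void resizeWindow(int windowId, int newW, int newH) {
        Geometry g = layout.get(windowId);
        g.w = newW;            // the new "w" and "h" replace the stored size
        g.h = newH;
    }

    public static void main(String[] args) {
        WindowLayoutStore store = new WindowLayoutStore();
        store.layout.put(1230, new Geometry(10, 10, 100, 80));
        store.moveWindow(1230, 200, 10);      // a move like the one shown in FIG. 13
        store.resizeWindow(1230, 180, 140);   // a resize like the one shown in FIG. 15
        System.out.println(store.layout.get(1230));
    }
}
```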
FIG. 17 illustrates an exemplary embodiment of a user interface 1700 for presenting descriptive information that can be modified by a user in accordance with the present disclosure. For a display area 1210, a modified version of FIG. 12 is shown which contains fewer elements than user interface 1200. Specifically, windows 1230, 1240, 1250, 1260, 1270, and 1290 have been rearranged in a manner as described for the present principles. In addition, windows 1220 and 1280 have been removed from user interface 1200 to yield user interface 1700.
FIG. 18 illustrates an exemplary embodiment of a user interface 1800 for presenting descriptive information that can be modified by a user in accordance with the present disclosure. In this arrangement, windows 1230, 1240, 1250 and 1260 are grouped together. Window 1290 has been moved to another area of display area 1210. It is noted that window 1270 is deleted from this configuration. Likewise, FIG. 19 illustrates an alternative arrangement of windows 1230, 1240, 1250, 1260, and 1290 for a user interface 1900 in accordance with the present disclosure.

FIG. 20 presents an exemplary embodiment of a block diagram 2000 with various servers that can be designated for receiving descriptive information media assets for a device. In accordance with the ability to specify what servers should be accessed or queried for descriptive information field media assets, servers 2010, 2020, and 2030 represent servers such as IVA, TMS, and the like. Server 2040 is a server operated by a network service provider 2050 where the service provider provides network service to a device 2060. A service provider 2050 may want their corresponding server 2040 to have a higher priority than other servers 2010, 2020, and 2030 because service provider 2050 can derive an economic benefit via advertising or attempting to sell a media asset through their server. For example, network service provider 2050, in response to a query for descriptive information for a media asset, can present a trailer (as a descriptive information field media asset) which allows a user device 2060 to request the media asset from the service provider 2050 through a video on demand service. This prevents outside services such as Amazon, Itunes, Netflix, IVA, IMDB, and the like from upselling a user through their information services. Alternatively, a local storage device 2070 coupled to a user's device 2060 can have descriptive information and descriptive information media assets available, as well. Such priority information can be designated in a table, database, configuration file, and the like.
Media assets and their corresponding descriptive information media assets can be called by using an application server 116, where various resources (files, components of media files, and the like) are constructed and a media asset will have a resource for each category that can be generated. For example, a "cast" resource will correspond to the "cast and crew" information that is shown. Likewise, for the "video" category, there will be a listing of videos under the "content video" resource. Resources can be implemented with text, video, pictures, JAVA code, HTML code, and the like. The various calls between an application server running a REST service and the resources on an asset server are shown below.

getContentVideos: This service takes a JSON request with a contentId parameter, and returns a JSON response with the IVA (Internet Video Archive) trailer URLs ingested into the database during the import process.

getContentImages: This service takes a JSON request with a contentId parameter, and returns a JSON response with the corresponding image URLs returned from the database or the JSON response available in the JSON response file in the following location: \opt\Spectrum\catalogs\backstage\contentimages

getContentNotes: This service takes a JSON request with a contentId parameter, and returns a JSON response with the corresponding image URLs returned from the database or the JSON response available in the JSON response file in the following location: \opt\Spectrum\catalogs\backstage\contentnotes

getPersonVideos: This service consumes a JSON request with a personId parameter, and returns a JSON response by reading the JSON response file from the following location: \opt\Spectrum\catalogs\backstage\person\videos

getPersonImages: This service consumes a JSON request with a personId parameter, and returns a JSON response by reading the JSON response file from the following location: \opt\Spectrum\catalogs\backstage\person\images

getHomePage: This service consumes a JSON request with a contentId parameter, and returns a JSON response by reading the JSON response file from the following location: \opt\Spectrum\catalogs\backstage\homepage
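To make the shape of these services concrete, below is a minimal JAX-RS (Jersey-style) resource sketch for one of them, in keeping with the Glassfish/Jersey environment mentioned earlier. The class name and the hard-coded JSON body are illustrative assumptions; a real deployment would need a JAX-RS runtime and a database lookup behind the method.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Illustrative JAX-RS resource for the getContentVideos service described
// above. The path layout follows the /backstage/... URL patterns of TABLE 3.
@Path("/backstage")
public class BackstageResource {

    @GET
    @Path("/contentvideos/{contentId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getContentVideos(@PathParam("contentId") String contentId) {
        // A real implementation would return the IVA trailer URLs ingested
        // into the database during the import process; this stub returns
        // an empty list as a placeholder.
        return "{\"contentId\":\"" + contentId + "\",\"trailers\":[]}";
    }
}
```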
The generation of various URLs, which are implementations of the various commands above to obtain resources, is shown in TABLE 3. (The rows for GetContentVideos and GetContentImages appear as an embedded image in the original filing and are not reproduced here.)

GetContentNotes
Request URL: http://<servername>:<port>/backstage/contentnotes/{contentId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5MV96e1ff2670a89bcde523f427a8f2061a-contentnotes.json

GetPersonVideos
Request URL: http://<servername>:<port>/backstage/personvideos/{personId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5PER079a5eb1ea8f04deb6f999b416f207a3-personvideos.json

GetPersonImages
Request URL: http://<servername>:<port>/backstage/personimages/{personId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5PER079a5eb1ea8f04deb6f999b416f207a3-personimages.json

GetHomePage
Request URL: http://<servername>:<port>/backstage/homepage/{contentId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5MV96e1ff2670a89bcde523f427a8f2061a-homepage.json

GetCast
Request URL: http://<servername>:<port>/backstage/cast/{contentId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5MV96e1ff2670a89bcde523f427a8f2061a-cast.json

GetCrew
Request URL: http://<servername>:<port>/backstage/crew/{contentId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5MV96e1ff2670a89bcde523f427a8f2061a-crew.json

GetBioPage
Request URL: http://<servername>:<port>/backstage/biopage/{personId}
Example response: https://access.technicolor.com/TDCD/spectrum/Engineering/Backend/DetailedDesign/Catalog/BackstageResponse/5PER079a5eb1ea8f04deb6f999b416f207a3-bio.json

TABLE 3
The creation of the resources described above can come from a variety of sources and data collection methods.
An embodiment of the disclosure can also be implemented in the following manner. A homepage such as presented in FIG. 12 is generated in response to a user requesting more information about a media asset. The homepage ideally displays a number of descriptive information fields for a user to select from. The generation of the homepage is from various resources. For example, when producing the home page, the images that are used to populate the homepage come from Baseline. One can construct the homepage where the same categories are used for each homepage regardless of the media asset being queried. In the present example, as presented in the figure, there are six categories, although any number of categories can be selected for the homepage. For the category "image", there can be a number of different images available, which also have title and description information when such information is available from an external database or other type of information source. When an image is selected, an image previewer application or slideshow can be activated which shows the selected image and other images that are related to a selected media asset which are also in the "image" category. Optionally, a background image from a database can also be used for a homepage, although any other background graphic can be made available. It is also considered that the vendors of various images and information that are used to construct a homepage and the underlying pages beneath can be replaced depending on the requirements of a network service provider who implements the described disclosure.
When accessing the category "video", resources serving as descriptive information media assets, such as movie trailers, actor interviews, crew interviews, documentaries, commercials about the media asset, and the like, are available for playback using a program such as FLASH, QUICKTIME, MICROSOFT MEDIA PLAYER, an H.264 media player, and the like. Such "video" resources can come from a database such as IVA which contains the videos, video thumbnails, title, and description information, although other databases can be used to retrieve such information. When the "cast and crew" category is accessed, each member of the cast is presented with image, actor name, and a corresponding character name, if such information is available. For example, if an image is available for an actor, the image, actor name, and character name are displayed. If an image is not available, the actor name and character name are shown (if such information is available); otherwise, the application will use an alternative or default image.
When a particular actor is selected, information which is available for the actor, such as the actor's date of birth, date of death, and other biographical information, can be displayed if such information is available from a database such as Baseline, IMDB, and the like. Ideally, 4 videos and 5 images are displayed for a particular actor, but the number of images and videos can be changed based on the requirements of a network service provider, user, and the like in accordance with the present principles. The selection of videos and images is described above. Optionally, the source of videos or images can come as results from a search of a service such as YouTube, Google, and the like, or videos can be pre-populated by a manual operation. When the "crew" information is accessed, each member of the crew will have their name listed, with crew role, and the image of the crew member. If an image of a crew member is not available, a default image will be displayed. The selection of a particular crew member will bring up a biography page of the crew member when such information is available.
Optionally, an "additional notes" category can be implemented using information for a media asset such as country of origin of the media asset, filming locations, box office gross, total sales of media (such as digital downloads, DVDs, Blu-Rays, streaming media, and the like), awards, sound track information, information how to buy the soundtrack or songs used for the media asset from a source like AMAZON, ITUNES, and the like. If such information is not available, the additional notes category should not be displayed.
FIG. 21 is a flowchart of a process 2100 for modifying a user interface in accordance with the present disclosure. In step 2110, a user interface is generated in response to a user command where the user interface should contain multiple windows. As described before, this user interface is generated because a user requests additional descriptive information about a media asset. When one of these windows of the user interface is activated, descriptive information is presented, being either a descriptive information media asset which is related to the media asset or information pertaining to a media asset descriptive information field. That is, the activation of a window either displays a different descriptive information media asset (a first trailer, a second trailer, an audio interview, etc.) and/or a media asset descriptive information field that generates an additional page of information as described in accordance with the present disclosure.
The generation of the user interface can be determined by a configuration file, table, database, local storage, remote storage, and the like which indicates how many windows are to be generated, the position and location of such windows, and whether such windows correspond to a descriptive information media asset and/or a media asset descriptive information field. The configuration information can also be referenced to indicate the source from which descriptive information should be retrieved, where such sources include remote servers and local storage. A user can modify the presentation of such windows in step 2120, where modifications can be the resizing of a window in step 2130, a change in the position of a window in step 2140, an addition or a removal of a window in step 2150, and the like. As an optional step, the locations from which descriptive information can be retrieved can be prioritized by using a hierarchical listing of servers. Such a step can take place when a service provider wants their own servers to be used instead of other servers.
In step 2160, whatever modifications are made to a user interface are stored in memory, which includes information containing the change in window location, change in window size, the addition or removal of windows, and the like. In addition, this step includes that the presentation of descriptive information can change because of the addition or removal of a window. For example, the addition of a window which corresponds to "cast" will have a window for the media asset descriptive information field "cast" generated in the user interface the next time a user requests descriptive information for a media asset. Likewise, the addition of another window can increase the number of trailers shown in a display area. The removal of a window would therefore eliminate a corresponding descriptive information field or descriptive media asset from the display of the user interface the next time a user requests descriptive information for a media asset. Hence, the subsequent presentation of what descriptive information is presented is changed by such modifications. Other modifications can be implemented in accordance with the present principles.
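A minimal sketch of steps 2150 and 2160 follows: adding or removing a window entry in the stored configuration changes the set of windows built the next time a user requests descriptive information. The map models the configuration store, and the entries and method names are illustrative assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of how adding or removing a window changes the subsequent
// presentation of descriptive information.
public class UserInterfaceConfig {
    // window id -> descriptive information field shown in that window
    private final Map<Integer, String> windows = new LinkedHashMap<>();

    void addWindow(int id, String descriptiveField) {
        windows.put(id, descriptiveField);   // e.g. a new "cast" window
    }

    void removeWindow(int id) {
        windows.remove(id);                  // its field disappears from later displays
    }

    // Called the next time a user requests descriptive information.
    void generateUserInterface() {
        windows.forEach((id, field) ->
                System.out.println("window " + id + " -> " + field));
    }

    public static void main(String[] args) {
        UserInterfaceConfig config = new UserInterfaceConfig();
        config.addWindow(940, "cast");
        config.addWindow(970, "videos");
        config.removeWindow(970);            // step 2150: removal is persisted
        config.generateUserInterface();      // only the "cast" window remains
    }
}
```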
It should be understood that the elements shown in the FIGS. may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.
All examples and conditional language recited herein are intended for informational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes that can be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The computer readable media and the code written thereon can be implemented in a transitory state (signal) or a non-transitory state (e.g., on a tangible medium such as a CD-ROM, DVD, Blu-Ray, hard drive, flash card, or other type of tangible storage medium).

The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor ("DSP") hardware, read only memory ("ROM") for storing software, random access memory ("RAM"), and nonvolatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
Although embodiments which incorporate the teachings of the present disclosure have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. It is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings.

Claims

1. A method for customizing a user interface comprising the steps of: generating a user interface containing a plurality of windows in response to a request for descriptive information for a media asset; and storing information indicating a modification to the plurality of windows affecting at least one window from the plurality of windows, wherein the modification changes a subsequent display of the descriptive information when the at least one window is activated.
2. The method of Claim 1, wherein the modification to the at least one window is movement of the at least one window to a different area in a display area.
3. The method of Claim 1, wherein the modification to the at least one window is the resizing of the at least one window to a different size.
4. The method of Claim 1 comprising an additional step of adding a window to the plurality of windows, where the added window results in the descriptive information displaying at least one of: a new descriptive information media asset and a new media asset descriptive information field corresponding to the added window.
5. The method of Claim 1 comprising an additional step of removing a window from the plurality of windows, where the display of the descriptive information is affected by removing from subsequent display at least one of a descriptive media asset and a media asset descriptive information field which corresponds to the removed window.
6. The method of Claim 1, wherein said descriptive information comprises at least one of a descriptive information media asset and a media asset descriptive information field.
7. The method of Claim 6, wherein the descriptive information media asset comprises at least one of a video, an audio asset, text, a picture, and a computer program.
8. The method of Claim 6, wherein the media asset descriptive information field comprises at least one of a synopsis of the asset, director, actor, studio, television information, movie information, music information, music artists, artists, producers, director of photography, screenplay authors, executive producers, genre, and date of release.
9. The method of Claim 1 comprising an additional step of retrieving descriptive information from a selected server in accordance with a server priority.
10. The method of Claim 1 comprising an additional step of submitting a query for descriptive information to a search engine at a second server when a first server fails to return descriptive information.
PCT/US2011/066574 2010-12-22 2011-12-21 Method for customizing the display of descriptive information about media assets WO2012088307A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP11850945.4A EP2656176A4 (en) 2010-12-22 2011-12-21 Method for customizing the display of descriptive information about media assets
CN201180062511.0A CN103270473B (en) 2010-12-22 2011-12-21 For customizing the method for display about the descriptive information of media asset
KR1020137019254A KR20140020852A (en) 2010-12-22 2011-12-21 Method for customizing the display of descriptive information about media assets
BR112013016163A BR112013016163A2 (en) 2010-12-22 2011-12-21 method for custom display of descriptive media asset information
JP2013546380A JP6078476B2 (en) 2010-12-22 2011-12-21 How to customize the display of descriptive information about media assets

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201061426137P 2010-12-22 2010-12-22
US61/426,137 2010-12-22

Publications (1)

Publication Number Publication Date
WO2012088307A1 (en) 2012-06-28

Family

ID=46314456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/066574 WO2012088307A1 (en) 2010-12-22 2011-12-21 Method for customizing the display of descriptive information about media assets

Country Status (6)

Country Link
EP (1) EP2656176A4 (en)
JP (1) JP6078476B2 (en)
KR (1) KR20140020852A (en)
CN (1) CN103270473B (en)
BR (1) BR112013016163A2 (en)
WO (1) WO2012088307A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430447B1 (en) 2013-05-20 2016-08-30 Google Inc. Presenting media content based on parsed text
US10595094B2 (en) 2013-09-10 2020-03-17 Opentv, Inc. Systems and methods of displaying content
CN112004164A (en) * 2020-07-02 2020-11-27 中山大学 Automatic generation method of video poster

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10771837B2 (en) 2016-09-07 2020-09-08 Rovi Guides, Inc. Systems and methods for presenting background graphics for media asset identifiers identified in a user defined data structure
CN111372109B (en) * 2019-11-29 2021-05-11 广东海信电子有限公司 Intelligent television and information interaction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040177063A1 (en) * 2003-03-06 2004-09-09 Weber Barry Jay Simplified searching for media services using a control device
US20060123052A1 (en) * 2004-10-25 2006-06-08 Apple Computer, Inc. Online purchase of digital media bundles having interactive content
US20060236221A1 (en) * 2001-06-27 2006-10-19 Mci, Llc. Method and system for providing digital media management using templates and profiles

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2282540A3 (en) * 1995-10-02 2013-05-15 Starsight Telecast, Inc. Systems and methods for providing television schedule information
US6804825B1 (en) * 1998-11-30 2004-10-12 Microsoft Corporation Video on demand methods and systems
JP2001195420A (en) * 2000-01-06 2001-07-19 Hitachi Maxell Ltd System and method for supporting musical information use
US20040068536A1 (en) * 2000-07-14 2004-04-08 Demers Timothy B. Multimedia player and browser system
CN1475081A (en) * 2000-10-11 2004-02-11 联合视频制品公司 System and method for supplementing on-demand media
JP2004007323A (en) * 2001-06-11 2004-01-08 Matsushita Electric Ind Co Ltd Television broadcast receiver
JP2003203035A (en) * 2002-01-07 2003-07-18 Digital Dream:Kk Information delivery method and information delivery system, information delivery program, editing program and computer-readable storage medium
JP2003271625A (en) * 2002-03-12 2003-09-26 Toshiba Eng Co Ltd Network retrieval system and method
JP2005085102A (en) * 2003-09-10 2005-03-31 Canon Inc Guarantee system
KR101167827B1 (en) * 2004-01-16 2012-07-26 힐크레스트 래보래토리스, 인크. Metadata brokering server and methods
US7925973B2 (en) * 2005-08-12 2011-04-12 Brightcove, Inc. Distribution of content
US8464177B2 (en) * 2006-07-26 2013-06-11 Roy Ben-Yoseph Window resizing in a graphical user interface

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060236221A1 (en) * 2001-06-27 2006-10-19 Mci, Llc. Method and system for providing digital media management using templates and profiles
US20040177063A1 (en) * 2003-03-06 2004-09-09 Weber Barry Jay Simplified searching for media services using a control device
US20060123052A1 (en) * 2004-10-25 2006-06-08 Apple Computer, Inc. Online purchase of digital media bundles having interactive content

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2656176A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430447B1 (en) 2013-05-20 2016-08-30 Google Inc. Presenting media content based on parsed text
US10595094B2 (en) 2013-09-10 2020-03-17 Opentv, Inc. Systems and methods of displaying content
US10992995B2 (en) 2013-09-10 2021-04-27 Opentv, Inc. Systems and methods of displaying content
US11363342B2 (en) 2013-09-10 2022-06-14 Opentv, Inc. Systems and methods of displaying content
US11825171B2 (en) 2013-09-10 2023-11-21 Opentv, Inc. Systems and methods of displaying content
CN112004164A (en) * 2020-07-02 2020-11-27 中山大学 Automatic generation method of video poster
CN112004164B (en) * 2020-07-02 2023-02-21 中山大学 Automatic video poster generation method

Also Published As

Publication number Publication date
CN103270473B (en) 2016-03-16
EP2656176A4 (en) 2016-08-10
EP2656176A1 (en) 2013-10-30
KR20140020852A (en) 2014-02-19
JP2014505930A (en) 2014-03-06
BR112013016163A2 (en) 2019-07-30
CN103270473A (en) 2013-08-28
JP6078476B2 (en) 2017-02-08

Similar Documents

Publication Publication Date Title
KR101718533B1 (en) Apparatus and method for grid navigation
US10514832B2 (en) Method for locating regions of interest in a user interface
US20140150023A1 (en) Contextual user interface
US9990394B2 (en) Visual search and recommendation user interface and apparatus
US10275532B2 (en) Method and system for content discovery
JP6078476B2 (en) How to customize the display of descriptive information about media assets
US20090113507A1 (en) Media System for Facilitating Interaction with Media Data Across a Plurality of Media Devices
US20170220583A1 (en) Method and apparatus for creating arrangements of spaces for virtual reality

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 201180062511.0; Country of ref document: CN)

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11850945; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2013546380; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

REEP Request for entry into the european phase (Ref document number: 2011850945; Country of ref document: EP)

WWE Wipo information: entry into national phase (Ref document number: 2011850945; Country of ref document: EP)

ENP Entry into the national phase (Ref document number: 20137019254; Country of ref document: KR; Kind code of ref document: A)

REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112013016163; Country of ref document: BR; Kind code of ref document: A2)

ENP Entry into the national phase (Ref document number: 112013016163; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20130624)